    Ranked Retrieval in Uncertain and Probabilistic Databases

    Ranking queries are widely used in data exploration, data analysis, and decision-making scenarios. While most currently proposed ranking techniques focus on deterministic data, several emerging applications involve data that are imprecise or uncertain. Ranking uncertain data raises new challenges in query semantics and processing, making conventional methods inapplicable. Furthermore, the interplay between ranking and uncertainty models introduces new dimensions for ordering query results that do not exist in traditional settings. This dissertation introduces new formulations and processing techniques for ranking queries on uncertain data. The formulations are based on marrying traditional ranking semantics with possible-worlds semantics under widely adopted uncertainty models. In particular, we study the impact of tuple-level and attribute-level uncertainty on the semantics and processing of ranking queries. Under the tuple-level uncertainty model, we introduce a processing framework that leverages the capabilities of relational database systems to recognize and handle data uncertainty in score-based ranking. The framework encapsulates a state-space model and efficient search algorithms that compute query answers by lazily materializing the necessary parts of the space. Under the attribute-level uncertainty model, we give a new probabilistic ranking model, based on partial orders, to encapsulate the space of possible rankings originating from uncertainty in attribute values. We present a set of efficient query evaluation algorithms, including sampling-based techniques built on the theory of Markov chains and Monte Carlo methods, to compute query answers. We build on our techniques for ranking under attribute-level uncertainty to support rank-join queries on uncertain data. We show how to extend current rank-join methods to handle uncertainty in scoring attributes, and we provide a pipelined query-operator implementation of an uncertainty-aware rank-join algorithm, integrated with sampling techniques, to compute query answers.
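
    As a rough illustration of the sampling-based evaluation described above (not the dissertation's actual algorithms), the following Python sketch estimates each tuple's probability of appearing in the top-k by repeatedly sampling concrete scores from per-tuple score intervals and ranking the resulting possible world; the uniform score intervals and all names are assumptions made for illustration.

        import random
        from collections import Counter

        def topk_probability(tuples, k, samples=10_000):
            """Monte Carlo estimate of P(tuple appears in top-k) under
            attribute-level uncertainty, modelled here (by assumption)
            as a uniform score interval per tuple."""
            hits = Counter()
            for _ in range(samples):
                # Sample one possible world: a concrete score for every tuple.
                world = [(tid, random.uniform(lo, hi)) for tid, (lo, hi) in tuples.items()]
                # Rank the sampled world by score and record the top-k tuple ids.
                world.sort(key=lambda t: t[1], reverse=True)
                for tid, _ in world[:k]:
                    hits[tid] += 1
            return {tid: hits[tid] / samples for tid in tuples}

        # Hypothetical tuples with uncertain scores given as (low, high) intervals.
        uncertain_scores = {"t1": (0.6, 0.9), "t2": (0.5, 0.8), "t3": (0.1, 0.95)}
        print(topk_probability(uncertain_scores, k=2))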

    DAMIA: a data mashup fabric for intranet applications

    Damia is a lightweight enterprise data integration service with which line-of-business users can create and catalog high-value data feeds for consumption by situational applications. Damia is inspired by the Web 2.0 mashup phenomenon. It consists of (1) a browser-based user interface that allows data mashups to be specified as data flow graphs using a set of operators, (2) a server with an execution engine, and (3) APIs for searching, debugging, executing, and managing mashups. Damia offers a framework and functionality for dynamic entity resolution, streaming, and other higher-value features that are particularly important in the enterprise domain. Damia is currently in perpetual beta on the IBM intranet. In this demonstration, we showcase the creation and execution of several enterprise data mashups, thereby illustrating the architecture and features of the overall Damia system.
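
    To make the data-flow style concrete, here is a minimal sketch of a mashup expressed as a pipeline of feed operators (fetch, filter, merge); the operator names and feed format are invented for illustration and are not DAMIA's actual API.

        from typing import Callable, Dict, Iterable

        Feed = Iterable[Dict]  # in this sketch, a feed is just a stream of records

        def fetch(records: list) -> Feed:
            """Source operator: emit records from an in-memory 'feed'."""
            yield from records

        def filter_op(feed: Feed, predicate: Callable[[Dict], bool]) -> Feed:
            """Keep only the records satisfying the predicate."""
            return (r for r in feed if predicate(r))

        def merge(*feeds: Feed) -> Feed:
            """Union several feeds into one."""
            for feed in feeds:
                yield from feed

        # Two hypothetical intranet feeds combined into a situational data mashup.
        sales = fetch([{"region": "EU", "total": 120}, {"region": "US", "total": 90}])
        forecasts = fetch([{"region": "EU", "total": 150}])
        mashup = filter_op(merge(sales, forecasts), lambda r: r["region"] == "EU")
        print(list(mashup))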

    Web 2.0 Technologies in the Software Development Process.

    Software engineers must communicate with many different people, likely in different locations, in order to create a successful piece of software. Social media can be used to communicate quickly and efficiently, minimizing miscommunication and facilitating collaboration in the software development process. Research in this area has been sparse but significant, because initial findings show that social media is being used in innovative ways to improve software development. Surveys of the social media some companies currently use, together with information about new social media systems, indicate possible uses for these technologies in future software development projects, such as documentation maintenance, employee training, and predicting, and thus preventing, build failures.

    Query-Time Data Integration

    Today, data is collected at ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources preclude up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if, and in what context, the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative to traditional up-front integration. It aims to enable users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced by fully automated retrieval and mapping methods is compensated for by answering those queries with ranked lists of alternative results. Each result is based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which constructs a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality compared to using separate systems for the two tasks. Finally, we study the management of large-scale dataset corpora such as data lakes or Open Data platforms, which serve as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for the data curation systems managing such corpora; it aims to improve the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions form the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
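
    As a hedged sketch of the Open World SQL idea (not the actual DrillBeyond implementation), the snippet below detects attributes that a query references but the local schema lacks and, for each, returns a ranked list of alternative augmentations drawn from a hypothetical corpus of Web data sources.

        # Sketch of query-time augmentation: the schema, corpus, and scores are
        # hypothetical stand-ins, not the DrillBeyond system's real interfaces.
        local_schema = {"country", "population"}

        def missing_attributes(query_attrs):
            """Attributes the query references but the local database lacks."""
            return [a for a in query_attrs if a not in local_schema]

        def rank_augmentations(attribute, corpus, k=3):
            """Top-k candidate data sources providing the attribute, ranked by
            an (assumed) relevance score; each one yields an alternative answer."""
            candidates = [src for src in corpus if attribute in src["attributes"]]
            return sorted(candidates, key=lambda s: s["score"], reverse=True)[:k]

        corpus = [
            {"name": "web_table_17", "attributes": {"co2_emissions"}, "score": 0.91},
            {"name": "open_data_42", "attributes": {"co2_emissions", "gdp"}, "score": 0.84},
        ]
        for attr in missing_attributes(["country", "population", "co2_emissions"]):
            print(attr, rank_augmentations(attr, corpus))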

    Proceedings of the GIS Research UK 18th Annual Conference GISRUK 2010

    This volume holds the papers from the 18th annual GIS Research UK conference (GISRUK). This year the conference was hosted at University College London (UCL) from Wednesday 14 to Friday 16 April 2010. The conference covered core geographic information science research as well as application domains such as crime and health, and technological developments in LBS and the geoweb. UCL’s research mission as a global university is based around a series of Grand Challenges that affect us all, and these were accommodated in GISRUK 2010. The overarching theme this year was “Global Challenges”, with specific focus on the following themes:
    * Crime and Place
    * Environmental Change
    * Intelligent Transport
    * Public Health and Epidemiology
    * Simulation and Modelling
    * London as a global city
    * The geoweb and neo-geography
    * Open GIS and Volunteered Geographic Information
    * Human-Computer Interaction and GIS
    Traditionally, GISRUK has provided a platform for early-career researchers as well as those with a significant track record of achievement in the area. As such, the conference provides a welcome blend of innovative thinking and mature reflection. GISRUK is the premier academic GIS conference in the UK, and we are keen to maintain its outstanding record of achievement in developing GIS in the UK and beyond.

    A human factors perspective on volunteered geographic information

    This thesis takes a multidisciplinary approach to understanding the unique abilities of Volunteered Geographic Information (VGI) to enhance the utility of online mashups in ways not achievable with Professional Geographic Information (PGI). The key issues currently limiting the successful use of VGI are concerns about the quality, accuracy, and value of the information, as well as the polarisation and bias of views within the user community. This thesis reviews different theoretical approaches in Human Factors, Geography, Information Science, and Computer Science to help understand the notion of user judgements of VGI within an online environment (Chapter 2). Research methods relevant to a human factors investigation are also discussed (Chapter 3). The scoping study established fundamental insights into the terminology and nature of VGI and PGI; a range of users were engaged through a series of qualitative interviews. This led to the development of a framework on VGI (Chapter 4) and a comparative description of users in relation to one another through a value framework (Chapter 5). Study Two was a qualitative multi-method investigation into how users perceive VGI and PGI in use (Chapter 6), demonstrating both similarities and the unique ability of VGI to provide utility to consumers. Study Three brought insight into the specific ability of VGI to enhance user judgements of online information within an information-relevance context (Chapters 7 and 8). Drawing on the outcomes of these studies, this thesis discusses how users perceive VGI as different from PGI in terms of its benefit to consumers from a user-centred design perspective (Chapter 9); in particular, the degree to which user concerns are valid, the limitations of VGI in application, and its potential strengths in enriching the experiences of consumers engaged in an information search. In conclusion, specific contributions and avenues for further work are highlighted (Chapter 10).

    Query Optimization for On-Demand Information Extraction Tasks over Text Databases

    Many modern applications involve analyzing large amounts of data that come from unstructured text documents. In its original format, this data contains information that, if extracted, can give more insight and support the decision-making process. The ability to answer structured SQL queries over unstructured data allows for more complex data analysis. Querying unstructured data can be accomplished with the help of information extraction (IE) techniques. The traditional approach is Extract-Transform-Load (ETL), which performs all possible extractions over the document corpus and stores the extracted relational results in a data warehouse; the extracted data is then queried. The ETL approach produces results that become out of date and causes an explosion in the number of possible relations and attributes to extract. Therefore, new approaches that perform extraction on the fly have been developed; however, previous efforts relied on specialized extraction operators or particular IE algorithms, which limited the optimization opportunities for such queries. In this work, we propose an online approach that integrates the engine of the database management system with IE systems using a new type of view called extraction views. Queries on text documents are evaluated using these extraction views, which are populated at query time with newly extracted data. Our approach enables the optimizer to apply all well-defined optimization techniques. The optimizer selects the best execution plan using a cost model that considers a user-defined balance between the cost and quality of extraction, and we explain the trade-off between the two factors. The main contribution is the ability to run on-demand information extraction that reflects the latest changes in the data, while avoiding unnecessary extraction from irrelevant text documents.
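
    As a rough illustration of the extraction-view idea (names and the cost model below are assumptions, not the paper's actual design), the sketch populates a virtual extraction view lazily at query time, running the expensive extractor only on documents that pass a cheap relevance filter, and scores candidate plans by a user-weighted combination of extraction cost and quality.

        import re

        def extraction_view(documents, is_relevant, extract):
            """Lazily populate an 'extraction view': run the (expensive) extractor
            only on documents that a cheap relevance filter lets through."""
            for doc in documents:
                if is_relevant(doc):
                    yield from extract(doc)

        def plan_score(cost, quality, alpha=0.5):
            """Hypothetical cost model: alpha sets the user-defined balance between
            extraction quality (higher is better) and cost (lower is better)."""
            return alpha * quality - (1 - alpha) * cost

        docs = ["Acme Corp was founded in 1999.", "Weather today: sunny."]
        extract_year = lambda d: re.findall(r"founded in (\d{4})", d)
        relevant = lambda d: "founded" in d
        print(list(extraction_view(docs, relevant, extract_year)))  # ['1999']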

    Automatic Extraction and Assessment of Entities from the Web

    The search for information about entities, such as people or movies, plays an increasingly important role on the Web. This information is still scattered across many Web pages, making it time-consuming for a user to find all relevant information about an entity. This thesis describes techniques to extract entities, and information about them, from the Web, including facts, opinions, questions and answers, interactive multimedia objects, and events. The findings of this thesis are that it is possible to create a large knowledge base automatically using a manually crafted ontology. The precision of the extracted information was found to be between 75 % and 90 % (for facts and entities, respectively) after applying assessment algorithms. The algorithms from this thesis can be used to create such a knowledge base, which can then be applied in various research fields, such as question answering, named entity recognition, and information retrieval.
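
    As a minimal, assumed illustration of how a precision figure like the 75–90 % quoted above could be computed (this is not the thesis's assessment algorithm), the snippet compares extracted facts against a manually judged sample.

        def precision(extracted, judged_correct):
            """Precision = correctly extracted items / all extracted items,
            computed over a manually judged sample (hypothetical data below)."""
            extracted = set(extracted)
            return len(extracted & set(judged_correct)) / len(extracted)

        facts = {("Jim Carrey", "born", "1962"), ("Jim Carrey", "born", "1984")}
        correct = {("Jim Carrey", "born", "1962")}
        print(f"precision = {precision(facts, correct):.2f}")  # 0.50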