
    Dynamic topic hierarchies and segmented rankings in textual OLAP technology.

    Programa de Pós-Graduação em Ciência da Computação. Departamento de Ciência da Computação, Instituto de Ciências Exatas e Biológicas, Universidade Federal de Ouro Preto.
The OLAP technology emerged 20 years ago and has recently been redesigned so that its dimensions, hierarchies and measures can support the particularities of textual data. Organizing textual data hierarchically can be solved with topic hierarchies. Currently, the topic hierarchy is defined only once in the data cube, i.e., for the entire lattice of cuboids. However, such a hierarchy is sensitive to the content of the document collection. Thus, a data cube cell can contain a collection of documents distinct from others in the same cube, causing potential changes in the topic hierarchy. Furthermore, the text segment used in the OLAP analysis also changes this hierarchy. In this work, we present a textual data cube with multiple dynamic topic hierarchies: multiple because they are built from different text segments, and dynamic because they are built for each cube cell. Another contribution of this work concerns the answers to multidimensional queries. The state of the art normally returns the top-k documents most relevant to the topic selected in the query. We go beyond this by returning other text segments as well, such as the most significant titles, abstracts and paragraphs. The approach is designed in four complementary steps, each of which further attenuates the cost of building multiple topic hierarchies and segmented rankings per cube cell. Experiments using part of the DBLP papers as a document collection reinforce our hypotheses.
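The per-cell, per-segment idea above can be illustrated with a minimal sketch. All names, the toy documents, and the flat term ranking standing in for a real topic hierarchy are assumptions for illustration, not the authors' implementation:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: each document carries dimension values
# (venue, year) and two text segments (title, abstract).
docs = [
    {"venue": "VLDB", "year": 2010, "title": "olap cube aggregation",
     "abstract": "cube lattice cuboid aggregation olap"},
    {"venue": "VLDB", "year": 2011, "title": "topic models for text",
     "abstract": "topic hierarchy text mining documents"},
    {"venue": "KDD", "year": 2010, "title": "text mining pipelines",
     "abstract": "text mining topic extraction pipeline"},
]

def cell_topics(docs, dims, segment, k=3):
    """Build a separate (flat, toy) topic ranking per cube cell and per
    text segment, instead of one hierarchy for the whole lattice."""
    cells = defaultdict(list)
    for d in docs:
        key = tuple(d[dim] for dim in dims)      # cell coordinates
        cells[key].append(d[segment])
    # One term ranking per cell: a stand-in for a real topic hierarchy.
    return {cell: [w for w, _ in Counter(" ".join(texts).split()).most_common(k)]
            for cell, texts in cells.items()}

# Group by venue only (one cuboid of the lattice), using the abstract segment.
print(cell_topics(docs, ["venue"], "abstract"))
```

Calling the function again with `"title"` instead of `"abstract"` yields different rankings for the same cells, which is the sensitivity to the text segment that the abstract describes.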

    Enterprise Data Mining & Machine Learning Framework on Cloud Computing for Investment Platforms

    Machine Learning and Data Mining are two key components of decision-making systems that can quickly provide valuable insights into huge data sets. Turning raw data into meaningful information, and converting that information into actionable tasks, makes organizations profitable and able to sustain immense competition. In the past decade we have seen an increase in Data Mining algorithms and tools for financial market analysis, consumer products, manufacturing, the insurance industry, social networks, scientific discovery and warehousing. Given the vast amount of data available for analysis, traditional tools and techniques are outdated for data analysis and decision support. Organizations are investing considerable resources in Data Mining frameworks in order to emerge as market leaders. Machine Learning is a natural evolution of Data Mining: existing Machine Learning techniques rely heavily on underlying Data Mining techniques, in which pattern recognition is an essential component. Building an efficient Data Mining framework is expensive and usually culminates in a multi-year project, and organizations pay a heavy price for any delay or an inefficient Data Mining foundation. In this research, we propose to build a cost-effective and efficient Data Mining (DM) and Machine Learning (ML) framework in a cloud computing environment to address the inherent limitations of existing design methodologies. The elasticity of the cloud architecture removes the hardware constraint on businesses. Our research focuses on refining and enhancing current Data Mining frameworks to build an enterprise data mining and machine learning framework. Our initial studies and techniques produced very promising results, reducing the existing build time considerably.
Our technique of dividing the DM and ML frameworks into several individual components (five sub-components) that can be reused at several phases of the final enterprise build is efficient and saves operational costs for the organization. Effective aggregation using selective cuboids and parallel computation using Azure Cloud Services are a few of the many techniques proposed in our research. Our research produced a nimble, scalable, portable architecture for enterprise-wide implementation of DM and ML frameworks.
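The "selective cuboids" idea mentioned above can be sketched in a few lines: rather than materializing the full 2^d lattice, only the dimension subsets that queries actually need are aggregated. The fact table, dimension names, and measure are hypothetical; this is an illustration of the general technique, not the proposed framework:

```python
from collections import defaultdict

# Hypothetical fact table: (region, product, channel, amount).
facts = [
    ("US", "fund_a", "web", 100.0),
    ("US", "fund_b", "web", 50.0),
    ("EU", "fund_a", "branch", 70.0),
]
DIMS = ("region", "product", "channel")

def aggregate_selected(facts, selected):
    """Materialize only the requested cuboids (dimension subsets),
    skipping the rest of the 2^d lattice."""
    cubes = {}
    for dims in selected:
        idx = [DIMS.index(d) for d in dims]
        agg = defaultdict(float)
        for row in facts:
            key = tuple(row[i] for i in idx)   # project onto the cuboid's dims
            agg[key] += row[3]                 # sum the measure
        cubes[dims] = dict(agg)
    return cubes

cubes = aggregate_selected(facts, [("region",), ("region", "product")])
print(cubes[("region",)])   # {('US',): 150.0, ('EU',): 70.0}
```

Each cuboid's aggregation is independent of the others, which is what makes this step easy to parallelize across cloud workers.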

    Linked Open Data - Creating Knowledge Out of Interlinked Data: Results of the LOD2 Project

    Database Management; Artificial Intelligence (incl. Robotics); Information Systems and Communication Service

    Proceedings of the Scientific Data Compression Workshop

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades which will be capable of producing an aggregate of 1500 Megabits per second if operated simultaneously. Such high data rates stress all aspects of end-to-end data systems, and technologies and techniques are needed to relieve these stresses. Potential solutions to the massive data rate problem are: data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing and Science Data Management, recommendations were made for research in data compression and scientific data applications for space platforms.

    Adaptive P2P platform for data sharing

    Ph.D. thesis (Doctor of Philosophy)

    Serving Datacube Tuples from Main Memory

    Existing datacube precomputation schemes materialize selected datacube tuples on disk, choosing the most beneficial cuboids (i.e., combinations of dimensions) to materialize given a space limit. However, in the context of a data warehouse receiving frequent "append" updates to the database, the cost of keeping these disk-resident cuboids up-to-date can be high. In this paper, we propose a main-memory based framework which provides rapid response to queries and requires considerably less maintenance cost than a disk-based scheme in an append-only environment. For a given datacube query, we first look among a set of previously materialized tuples for a direct answer. If none is found, we use a hash-based scheme reminiscent of partial match retrieval to rapidly compute the answer from the finest-level data stored in a special in-memory data structure. Our approach is limited to the important class of applications in which the finest-granularity tuples of the datacube fit in main memory.