
    Data Warehousing in the Cloud

    A data warehouse, more than a concept, is a system designed to store the information related to the activities of an organization in a consolidated way, serving as a single point of truth for any report or analysis that may be carried out. It enables the analysis of the large volumes of information that typically originate in an organization's transactional systems (OLTP, Online Transaction Processing). This concept arose from the need to integrate corporate data spread across the various application servers an organization may have, so that the data could be made accessible to all users who need to consume information and make decisions based on it. With the appearance of more and more data there has also been a growing need to analyze it; however, today's data warehouse systems do not have the capacity to handle the huge amount of data that is currently produced and needs to be processed and analyzed. This is where cloud computing comes in. Cloud computing is a model that enables ubiquitous, on-demand access over the Internet to a pool of shared or dedicated computing resources (such as networks, servers, or storage) that can be rapidly provisioned or released with a simple request and without human intervention. In this model, resources are practically unlimited and, working together, deliver very high computing power that can and should be used for the most varied purposes. From the combination of these two concepts emerges the cloud data warehouse, which extends the way traditional data warehouse systems are defined by allowing their sources to be located anywhere, as long as they are accessible through the Internet, while also taking advantage of the great computational power of a cloud infrastructure. Despite the recognized advantages, there are still some challenges; two of the most prominent are security and the way data is transferred to the cloud. In this dissertation, a comparative study of several cloud data warehouse solutions was carried out with the aim of recommending the best among those studied and tested. A first assessment was made based on Gartner criteria and on a survey about the subject; from it emerged the two solutions that were subject to a finer comparison and on which the tests whose evaluation dictated the recommendation were performed.
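
    A minimal sketch of the kind of weighted-criteria scoring such a shortlisting step can use is shown below, in Python. The criteria, weights, solution names, and ratings are hypothetical placeholders for illustration; they are not the Gartner criteria, survey results, or solutions evaluated in the dissertation.

    from typing import Dict

    # Hypothetical criteria and weights; the dissertation's actual Gartner-based
    # criteria and survey-derived weighting are not reproduced here.
    WEIGHTS: Dict[str, float] = {"security": 0.3, "performance": 0.3, "cost": 0.2, "ease_of_use": 0.2}

    def score(ratings: Dict[str, float]) -> float:
        """Weighted sum of per-criterion ratings (e.g. 1-5) for one cloud data warehouse solution."""
        return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

    # Illustrative candidates only; real solution names and ratings would come from the study.
    candidates = {
        "Solution A": {"security": 4, "performance": 5, "cost": 3, "ease_of_use": 4},
        "Solution B": {"security": 5, "performance": 4, "cost": 4, "ease_of_use": 3},
        "Solution C": {"security": 3, "performance": 3, "cost": 5, "ease_of_use": 5},
    }

    # Shortlist the two best-scoring solutions for the finer comparison and hands-on tests.
    shortlist = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)[:2]
    print(shortlist)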

    Experimental evaluation of big data querying tools

    In recent years, the term Big Data has become a widely debated topic in several business areas. One of the main challenges related to this concept is how to handle the enormous volume and variety of data efficiently. Due to the notorious complexity and volume of data associated with the Big Data concept, efficient querying mechanisms are needed for data analysis purposes. Motivated by the rapid development of Big Data tools and frameworks, there is much discussion about querying tools and, more specifically, about which are the most appropriate for specific analytical needs. This dissertation describes and compares the main features and architectures of the following well-known Big Data analytical tools: Drill, HAWQ, Hive, Impala, Presto and Spark. To test the performance of these tools, we also describe the process of preparing, configuring and administering a Hadoop cluster so that we could install and use them in an environment capable of evaluating their performance and identifying the scenarios best suited to their use. For this evaluation we used the TPC-H and TPC-DS benchmarks, whose results showed that in-memory processing tools such as HAWQ, Impala and Presto perform better on small and medium-sized datasets. However, the tools with slower execution times, especially Hive, appear to catch up with the better-performing tools as the benchmark datasets grow.
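
    As a rough illustration of how such a head-to-head comparison can be timed, the following Python sketch runs the same SQL statement against several engines through generic DB-API connections and reports the median wall-clock time. The connection factories, host names, and query files shown in the comments are assumptions for illustration, not the cluster setup or benchmark harness used in the dissertation.

    import time
    from statistics import median
    from typing import Callable, Dict, List

    def time_query(connect: Callable, sql: str, runs: int = 3) -> float:
        """Run `sql` several times on a fresh connection and return the median wall-clock time."""
        timings: List[float] = []
        for _ in range(runs):
            conn = connect()
            try:
                cursor = conn.cursor()
                start = time.perf_counter()
                cursor.execute(sql)
                cursor.fetchall()  # force full result materialization
                timings.append(time.perf_counter() - start)
            finally:
                conn.close()
        return median(timings)

    def run_benchmark(engines: Dict[str, Callable], queries: Dict[str, str]) -> None:
        """Time every query on every engine and print a simple comparison table."""
        for query_name, sql in queries.items():
            for engine_name, connect in engines.items():
                print(f"{query_name:10s} {engine_name:8s} {time_query(connect, sql):8.2f}s")

    # Example wiring (assumed endpoints and files, not the dissertation's setup):
    # from pyhive import hive, presto
    # engines = {
    #     "Hive":   lambda: hive.connect(host="namenode", database="tpch"),
    #     "Presto": lambda: presto.connect(host="coordinator", catalog="hive", schema="tpch"),
    # }
    # queries = {"TPC-H Q1": open("queries/q1.sql").read()}
    # run_benchmark(engines, queries)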

    Benchmarking Big Data SQL Frameworks


    The Effects of Advanced Analytics and Machine Learning on the Transportation of Natural Gas

    This qualitative single case study describes the effects an advanced analytic and machine learning (AAML) system has on the transportation of natural gas through pipelines and the causes of failure to fully utilize such a system. The study's guiding theories were the Unified Theory of Acceptance and Use of Technology (UTAUT) model and Transformational Leadership. The factors behind the failure to fully utilize AAML systems were studied, as were the factors that made the AAML system successful. Data were collected through participant interviews. This study indicates that the primary factors in the failure to fully utilize AAML systems are training and resource allocation. The AAML system successfully increased the participants' productivity and analytical abilities by eliminating many of the manual steps involved in producing reports and analyzing business conditions. It also allowed the organization to gather and analyze real-time data in a volume and manner that would have been impossible before the system was installed. The leadership team brought about the AAML system's success through transformational leadership, encouraging creativity and spurring innovation while providing the proper funding, time and personnel to support the system.

    Big Data

    This thesis aims to analyze the big data market; it covers the providers along with some interesting use cases. Nowadays the term big data draws a lot of attention, from both a business and a personal perspective. For decades, companies have made business decisions through their Business Intelligence departments, based on transactional data stored mostly in relational databases. However, regulatory compliance, increased competition and other pressures have created an insatiable need for companies to accumulate and analyze large, fast-growing quantities of data that go well beyond this critical transactional data.

    Towards a big data reference architecture


    Augmenting data warehousing architectures with hadoop

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. As the volume of available data increases exponentially, traditional data warehouses struggle to transform this data into actionable knowledge. Data strategies that include the creation and maintenance of data warehouses have a lot to gain by incorporating technologies from the Big Data spectrum. Hadoop, as a transformation tool, can add a theoretically infinite dimension of data processing, feeding transformed information into traditional data warehouses that ultimately will retain their value as central components in organizations' decision support systems. This study explores the potential of Hadoop as a data transformation tool in the setting of a traditional data warehouse environment. Hadoop's execution model, which is oriented towards distributed parallel processing, offers great capabilities when the amounts of data to be processed require the infrastructure to expand. Horizontal scalability, a key aspect of a Hadoop cluster, allows for proportional growth in processing power as the volume of data increases. Through the use of Hive on Tez in a Hadoop cluster, this study transforms television viewing events, extracted from Ericsson's Mediaroom Internet Protocol Television infrastructure, into pertinent audience metrics, like Rating, Reach and Share. These measurements are then made available in a traditional data warehouse, supported by a traditional Relational Database Management System, where they are presented through a set of reports. The main contribution of this research is a proposed augmented data warehouse architecture in which the traditional ETL layer is replaced by a Hadoop cluster, running Hive on Tez, with the purpose of performing the heaviest transformations that convert raw data into actionable information. Through a typification of the SQL statements responsible for the data transformation processes, we were able to understand that Hadoop, and its distributed processing model, delivers outstanding performance results in the analytical layer, namely in the aggregation of large data sets. Ultimately, we demonstrate, empirically, the performance gains that can be extracted from Hadoop, in comparison to an RDBMS, regarding speed, storage usage and scalability potential, and suggest how this can be used to evolve data warehouses into the age of Big Data.
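
    As an illustration of where the heavy aggregation sits in such an augmented architecture, the Python sketch below submits a Reach/Rating-style aggregation to Hive on Tez and returns rows ready to be loaded into the relational data warehouse. The table, columns, host name and metric formulas are hypothetical stand-ins, not the actual Mediaroom schema or the dissertation's SQL.

    from pyhive import hive  # assumed client library; the study's actual tooling may differ

    # Hypothetical viewing-events table: (device_id, channel, view_seconds, event_date).
    # Reach ~ distinct devices that watched; Rating ~ average audience as a fraction of the universe.
    AUDIENCE_METRICS_SQL = """
        SELECT channel,
               COUNT(DISTINCT device_id)                  AS reach,
               SUM(view_seconds) / (86400.0 * {universe}) AS rating
        FROM viewing_events
        WHERE event_date = '{day}'
        GROUP BY channel
    """

    def compute_daily_metrics(day: str, universe: int) -> list:
        """Run the aggregation on the Hadoop side (Hive on Tez) and return rows
        ready to be loaded into the traditional RDBMS-backed data warehouse."""
        conn = hive.connect(host="hadoop-edge-node", database="audience")  # assumed endpoint
        try:
            cursor = conn.cursor()
            cursor.execute("SET hive.execution.engine=tez")  # keep the heavy transformation on Tez
            cursor.execute(AUDIENCE_METRICS_SQL.format(day=day, universe=universe))
            return cursor.fetchall()
        finally:
            conn.close()

    A Share-style metric would follow the same pattern, adding a per-day total-viewing aggregate as the denominator.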

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all the technical papers received in time for publication just prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland, University College Conference Center, in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future of storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    Modern data analytics in the cloud era

    Cloud computing has been the groundbreaking technology of the last decade. The ease of use of the managed environment, in combination with a nearly infinite amount of resources and a pay-per-use price model, enables fast and cost-efficient project realization for a broad range of users. Cloud computing also changes the way software is designed, deployed and used. This thesis focuses on database systems deployed in the cloud environment. We identify three major interaction points of the database engine with its environment that show changed requirements compared to traditional on-premise data warehouse solutions. First, software is deployed on elastic resources. Consequently, systems should support elasticity in order to match workload requirements and be cost-effective. We present an elastic scaling mechanism for distributed database engines, combined with a partition manager that provides load balancing while minimizing partition reassignments in the case of elastic scaling. Furthermore, we introduce a buffer pre-heating strategy that mitigates the cold start after scaling and yields an immediate performance benefit from the newly provisioned resources. Second, cloud-based systems are accessible and available from nearly everywhere. Consequently, data is frequently ingested from numerous endpoints, which differs from the bulk loads or ETL pipelines of a traditional data warehouse solution. Many users do not define database constraints in order to avoid transaction aborts due to conflicts or to speed up data ingestion. To mitigate this issue we introduce the concept of PatchIndexes, which allow the definition of approximate constraints. PatchIndexes maintain exceptions to constraints, make them usable in query optimization and execution, and offer efficient update support. The concept can be applied to arbitrary constraints, and we provide examples of approximate uniqueness and approximate sorting constraints. Moreover, we show how PatchIndexes can be exploited to define advanced constraints like an approximate multi-key partitioning, which offers robust query performance over workloads with different partition key requirements. Third, data-centric workloads have changed over the last decade. Besides traditional SQL workloads for business intelligence, data science workloads are of significant importance nowadays. In these cases the database system often acts only as a data provider, while the computational effort takes place in dedicated data science or machine learning (ML) environments. As this workflow has several drawbacks, we pursue the goal of pushing advanced analytics towards the database engine and introduce the Grizzly framework as a DataFrame-to-SQL transpiler. Based on this, we identify user-defined functions (UDFs) and ML inference as important tasks that would benefit from a deeper engine integration, and we investigate and evaluate approaches for in-database execution of Python UDFs and in-database ML inference.
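
    To make the PatchIndex idea from the abstract concrete, the following Python sketch maintains an approximate uniqueness constraint by recording only the exception values, so a query can take a fast path over the clean part of a column and a small fallback over the exceptions. This is a conceptual illustration under assumed semantics, not the engine's actual data structure or API.

    from collections import Counter
    from typing import Iterable, Set

    class ApproxUniqueIndex:
        """Conceptual sketch of a PatchIndex for an approximate uniqueness constraint:
        the column is treated as unique except for an explicitly maintained set of
        exception values (the 'patch')."""

        def __init__(self, values: Iterable):
            counts = Counter(values)
            self.seen: Set = set(counts)
            self.exceptions: Set = {v for v, c in counts.items() if c > 1}

        def is_clean(self, value) -> bool:
            """True if uniqueness holds for this value, so the optimizer could,
            for example, skip duplicate elimination on the clean part."""
            return value not in self.exceptions

        def insert(self, value) -> None:
            """Updates only touch the exception set, keeping maintenance cheap."""
            if value in self.seen:
                self.exceptions.add(value)
            self.seen.add(value)

    # A query can then be split into a fast path over the clean part and a small
    # fallback over the exception set.
    idx = ApproxUniqueIndex(["a", "b", "b", "c"])
    idx.insert("c")                       # "c" now violates uniqueness
    rows = ["a", "b", "c", "d"]
    clean_part   = [v for v in rows if idx.is_clean(v)]      # ['a', 'd']
    patched_part = [v for v in rows if not idx.is_clean(v)]  # ['b', 'c']

    Approximate sorting or partitioning constraints would be handled analogously, with the exception set covering the out-of-order or misplaced rows.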

    Development of a parallel database environment
