591 research outputs found

    Proceedings TLAD 2012: 10th International Workshop on the Teaching, Learning and Assessment of Databases

    This is the tenth in the series of highly successful international workshops on the Teaching, Learning and Assessment of Databases (TLAD 2012). TLAD 2012 is held on 9th July at the University of Hertfordshire and hopes to be just as successful as its predecessors. The teaching of databases is central to all Computing Science, Software Engineering, Information Systems and Information Technology courses, and this year the workshop aims to continue the tradition of bringing together both database teachers and researchers to share good learning, teaching and assessment practice and experience, and to further the growing community among database academics. As well as attracting academics and teachers from the UK community, the workshop has also been successful in attracting academics from the wider international community, who serve on the programme committee and attend and present papers. Thanks to the healthy number of high-quality submissions this year, the workshop will present eight peer-reviewed papers: six as full papers and two as short papers. These papers cover a number of themes, including the teaching of data mining and data warehousing, SQL and NoSQL, databases at school, and database curricula themselves. The final paper gives a timely ten-year review of the TLAD workshops, and it is expected that these papers will lead to a stimulating closing discussion that continues beyond the workshop. We also look forward to a keynote presentation by Karen Fraser, who has contributed to many TLAD workshops as the HEA organizer. Titled “An Effective Higher Education Academy”, the keynote will discuss the Academy’s plans for the future and outline how participants can get involved.

    Scalable Architecture for Integrated Batch and Streaming Analysis of Big Data

    Thesis (Ph.D.) - Indiana University, Computer Sciences, 2015. As Big Data processing problems evolve, many modern applications demonstrate special characteristics. Data exists in the form of both large historical datasets and high-speed real-time streams, and many analysis pipelines require integrated parallel batch processing and stream processing. Despite the large size of the whole dataset, most analyses focus on specific subsets according to certain criteria. Correspondingly, integrated support for efficient queries and post-query analysis is required. To address the system-level requirements brought by such characteristics, this dissertation proposes a scalable architecture for integrated queries, batch analysis, and streaming analysis of Big Data in the cloud. We verify its effectiveness using a representative application domain - social media data analysis - and tackle related research challenges emerging from each module of the architecture by integrating and extending multiple state-of-the-art Big Data storage and processing systems. In the storage layer, we reveal that existing text indexing techniques do not work well for the unique queries of social data, which put constraints on both textual content and social context. To address this issue, we propose a flexible indexing framework over NoSQL databases to support fully customizable index structures, which can embed the necessary social context information for efficient queries. The batch analysis module demonstrates that analysis workflows consist of multiple algorithms with different computation and communication patterns, which are suitable for different processing frameworks. To achieve efficient workflows, we build an integrated analysis stack based on YARN and make novel use of customized indices in developing sophisticated analysis algorithms. In the streaming analysis module, the high-dimensional data representation of social media streams poses special challenges to the problem of parallel stream clustering. Due to the sparsity of the high-dimensional data, traditional synchronization methods become expensive and severely impact the scalability of the algorithm. We therefore design a novel strategy that broadcasts the incremental changes rather than the whole centroids of the clusters, yielding scalable parallel stream clustering algorithms. Performance tests using real applications show that our solutions for parallel data loading/indexing, queries, analysis tasks, and stream clustering all significantly outperform implementations using current state-of-the-art technologies.
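
    The synchronization strategy described in the abstract can be made concrete. Below is a minimal sketch, assuming sparse term-frequency vectors, of broadcasting incremental centroid changes rather than whole centroids; all class and method names are hypothetical, not the dissertation's actual API, and count reconciliation between replicas is omitted for brevity.

```python
# Illustrative sketch (not the dissertation's code): synchronizing parallel
# stream clustering by broadcasting sparse centroid *deltas* instead of the
# full high-dimensional centroids. All names here are hypothetical.

from collections import defaultdict

class Worker:
    """One parallel stream-clustering worker holding replicated centroids."""

    def __init__(self, n_clusters):
        # Centroid state: one sparse dict (dimension -> weight) per cluster.
        self.centroids = [defaultdict(float) for _ in range(n_clusters)]
        self.counts = [0] * n_clusters

    def absorb(self, cluster_id, point):
        """Assign a sparse point (dict of dim -> value) to a cluster and
        return only the dimensions whose weights actually changed."""
        self.counts[cluster_id] += 1
        n, delta = self.counts[cluster_id], {}
        for dim, value in point.items():
            old = self.centroids[cluster_id][dim]
            new = old + (value - old) / n          # incremental mean update
            self.centroids[cluster_id][dim] = new
            delta[dim] = new - old
        return delta

    def apply_delta(self, cluster_id, delta):
        """Merge a remote worker's sparse update into the local replica.
        (A real implementation must also reconcile per-cluster counts.)"""
        for dim, change in delta.items():
            self.centroids[cluster_id][dim] += change

workers = [Worker(n_clusters=2) for _ in range(3)]
# Worker 0 absorbs a sparse document vector and broadcasts only the delta,
# which for sparse data is far smaller than the whole centroid:
delta = workers[0].absorb(0, {"nosql": 1.0, "yarn": 0.5})
for peer in workers[1:]:
    peer.apply_delta(0, delta)
```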

    Views in graph databases: a multifocus approach for heterogeneous data

    Advisor: Claudia Maria Bauzer Medeiros. Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação. Scientific research has become data-intensive and data-dependent. This new research paradigm requires sophisticated computer science techniques and technologies to support the life cycle of scientific data and collaboration among scientists from distinct areas. A major requirement is that researchers working in data-intensive interdisciplinary teams demand the construction of multiple perspectives of the world, built over the same datasets. Present solutions cover a wide range of aspects, from the design of interoperability standards to the use of non-relational database management systems. None of these efforts, however, adequately meets the need for multiple perspectives, which are called foci in this thesis. Basically, a focus is designed and built to cater to a research group (even within a single project) that needs to deal with a subset of data of interest, under multiple aggregation/generalization levels. The definition and creation of a focus are complex tasks that require mechanisms and engines to manipulate multiple representations of the same real-world phenomenon. This PhD research aims to provide multiple foci over heterogeneous data. To meet this challenge, we deal with four research problems. The first two were (1) choosing an appropriate data management paradigm and (2) eliciting multifocus requirements. Our work towards solving these problems led us to choose graph databases for (1) and the concept of views from relational databases for (2). However, there is no consensual data model for graph databases, and views are seldom discussed in this context. Thus, research problems (3) and (4) are: (3) specifying an adequate graph data model and (4) defining a framework to handle views on graph databases. Our research on these problems resulted in the main contributions of this thesis: (i) making the case for the use of graph databases as the persistence layer in multifocus research - a schemaless and relationship-driven type of database that provides a full understanding of data connections; (ii) defining views for graph databases to support the need for multiple foci, considering graph data manipulation, graph algorithms and traversal tasks; (iii) proposing a property graph data model (PGDM) to fill the gap left by the absence of a full-fledged data model for graphs; (iv) specifying and implementing a framework, named Graph-Kaleidoscope, that supports views over graph databases; and (v) validating our framework with real-world applications in two domains - biodiversity and environmental resources - typical examples of multidisciplinary research that involve the analysis of interactions of phenomena using heterogeneous data.
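
    To make the notion of a focus concrete, here is a minimal sketch of a view over a property graph: a derived subgraph selected by predicates over node and edge properties. This is illustrative only; it is not Graph-Kaleidoscope's actual API, whose details the abstract does not give.

```python
# A minimal sketch of the *concept* of a view over a property graph:
# a derived subgraph selected by predicates over properties. All names
# are hypothetical and do not reflect Graph-Kaleidoscope's interface.

class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node id -> property dict
        self.edges = []   # (source id, target id, property dict)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, dst, **props):
        self.edges.append((src, dst, props))

    def view(self, node_pred, edge_pred=lambda props: True):
        """Materialize a focus: keep the nodes matching node_pred and
        the edges between kept nodes that match edge_pred."""
        sub = PropertyGraph()
        for nid, props in self.nodes.items():
            if node_pred(props):
                sub.add_node(nid, **props)
        for src, dst, props in self.edges:
            if src in sub.nodes and dst in sub.nodes and edge_pred(props):
                sub.add_edge(src, dst, **props)
        return sub

# Example: a biodiversity focus that only sees species and their
# interactions, hiding the rest of the heterogeneous dataset.
g = PropertyGraph()
g.add_node("n1", kind="species", name="Apis mellifera")
g.add_node("n2", kind="species", name="Coffea arabica")
g.add_node("n3", kind="sensor", name="station-42")
g.add_edge("n1", "n2", kind="pollinates")
species_focus = g.view(lambda props: props.get("kind") == "species")
```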

    Big Data Management Challenges, Approaches, Tools and their limitations

    Big Data is the buzzword everyone talks about. Independently of the application domain, there is today a consensus about the V's characterizing Big Data: Volume, Variety, and Velocity. Focusing on data management issues and on past experience in the area of database systems, this chapter examines the main challenges involved in the three V's of Big Data. It then reviews the main characteristics of existing solutions for addressing each of the V's (e.g., NoSQL, parallel RDBMS, stream data management systems and complex event processing systems). Finally, it provides a classification of the different functions offered by NewSQL systems and discusses their benefits and limitations for processing Big Data.

    Database Paradigms for Recordings Management

    The relational database has long been considered the de facto standard for managing data in software applications. Today, a need for more scalable, flexible and distributed software solutions has led to the development of NoSQL database technologies that aim to replace the relational database in applications where such features are needed. In this thesis we investigated the potential benefits of replacing SQLite, the database used by Axis Communications to manage recordings in their camera products, with a “Not only SQL” (NoSQL) database in an embedded camera system. To evaluate performance, we designed test cases measuring execution times and resource consumption for database operations, based on important functionality in Axis’ storage solution. In the end, the Embedded JSON Database Engine (EJDB), a document database, was identified as the most suitable candidate. EJDB was found to be more efficient than SQLite at creating, updating and removing records. It was, however, less efficient when performing queries based on conditional operators.
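
    As an illustration of the measurement pattern such an evaluation might use, the sketch below times two of the operations mentioned (bulk insertion and a conditional query) against SQLite via Python's built-in sqlite3 module. The EJDB side is omitted because its binding API is not described in the abstract, and the schema and sizes are invented for the example.

```python
# A minimal sketch of a database-operation timing harness, shown against
# SQLite only. Table layout and record counts are illustrative, not the
# thesis's actual test cases.

import sqlite3
import time

def timed(label, fn):
    """Run fn once and report its wall-clock execution time."""
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.4f}s")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recordings (id INTEGER PRIMARY KEY, "
             "camera TEXT, started REAL, size_bytes INTEGER)")

def insert_records():
    # Bulk creation of records, one of the operations where EJDB won.
    conn.executemany(
        "INSERT INTO recordings (camera, started, size_bytes) VALUES (?, ?, ?)",
        ((f"cam-{i % 8}", time.time(), i * 1024) for i in range(10_000)))
    conn.commit()

def conditional_query():
    # A query with conditional operators, the case where SQLite won.
    conn.execute("SELECT * FROM recordings WHERE size_bytes > ? AND camera = ?",
                 (1_000_000, "cam-3")).fetchall()

timed("insert 10k records", insert_records)
timed("conditional query", conditional_query)
```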

    Key-Value Storage for handling data in mobile devices

    In the current era of technology, computers have shrunk to the point that more than half of the world's population always carries one with them - their mobile devices. These are used in all sorts of activities, constantly generating information that needs to be stored or processed somewhere. To cope with the huge amounts of data generated by all of these devices, applications have resorted to Cloud services to provide them with the much-needed computational and storage resources, but as these remote infrastructures still represent a communication bottleneck, a new paradigm has been emerging: Edge Computing. Instead of processing and storing all the data in more distant cloud services, the data is spread among mobile devices and edge servers connected in a shared network. In order to fully take advantage of the low latencies experienced at the Edge, applications still need a distributed, edge-oriented storage system capable of handling the contents generated by all of these mobile devices. Current state-of-the-art storage systems can provide these applications with a storage platform that uses either mobile devices or edge servers as data storing points, but none uses both. In this thesis we propose a key-value edge storage system named Basil, which uses both mobile devices and edge infrastructures as nodes of the system and provides users in different locations with a cohesive and consistent distributed storage system. Furthermore, we test our KV store against existing NoSQL storage models deployed at the edge, as well as its own performance while varying the number of nodes it relies on.
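
    Basil's placement strategy is not described in the abstract, but the idea of using both device classes as storage nodes can be sketched with a generic consistent-hashing ring in which mobile devices and edge servers join as peers; every name below is hypothetical.

```python
# A minimal sketch, assuming a consistent-hashing ring, of how a key-value
# store might place data across *both* mobile devices and edge servers.
# This is not Basil's actual design; all names are hypothetical.

import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class EdgeKVRing:
    def __init__(self, nodes, vnodes=64):
        # nodes: list of (node_id, kind), kind being "mobile" or "edge".
        # Both kinds join the same ring, each as several virtual nodes.
        self.ring = sorted(
            (_hash(f"{node_id}#{i}"), node_id)
            for node_id, kind in nodes
            for i in range(vnodes))
        self.store = {node_id: {} for node_id, _ in nodes}

    def _owner(self, key):
        # First virtual node clockwise from the key's hash owns the key.
        idx = bisect.bisect(self.ring, (_hash(key),)) % len(self.ring)
        return self.ring[idx][1]

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)

ring = EdgeKVRing([("phone-a", "mobile"), ("phone-b", "mobile"),
                   ("edge-1", "edge")])
ring.put("sensor/42/reading", b"23.5C")
assert ring.get("sensor/42/reading") == b"23.5C"
```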

    Real-Time intelligence

    MSc dissertation in Computer Science. Over the past 20 years, data has increased on a large scale in various fields. This explosive increase of global data led to the coining of the term Big Data. Big Data is mainly used to describe enormous datasets that typically include masses of unstructured data that may need real-time analysis. This paradigm brings important challenges to tasks like data acquisition, storage and analysis. The ability to perform these tasks efficiently has attracted the attention of researchers, as it brings many opportunities for creating new value. Another topic of growing importance is the use of behavioural biometrics, which have been applied in a wide set of areas such as healthcare and security. This work handles the data pipeline for data generated by a large-scale behavioural biometrics application, providing the basis for real-time analytics and behavioural classification - in particular, the automatic classification of records of fatigue in human-computer interaction. The challenges regarding analytical queries (with real-time requirements, due to the need to monitor the metrics/behaviour) and classifier training are particularly addressed.
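
    As one illustration of the real-time-metrics side of such a pipeline, the sketch below keeps a sliding-window mean of a behavioural metric per user, fresh enough for monitoring. The actual pipeline and the features used for fatigue classification are not specified in the abstract; all names are illustrative.

```python
# A minimal sketch of real-time metric monitoring over a behavioural
# stream: a per-user sliding-window aggregator. Hypothetical names; not
# the dissertation's actual pipeline.

import time
from collections import defaultdict, deque

class SlidingWindowMetrics:
    """Keeps the mean of a behavioural metric (e.g., time between
    keystrokes) over the last `window_s` seconds, per user."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.events = defaultdict(deque)   # user -> deque of (ts, value)

    def observe(self, user, value, ts=None):
        ts = time.time() if ts is None else ts
        q = self.events[user]
        q.append((ts, value))
        # Evict observations that have fallen out of the window.
        while q and q[0][0] < ts - self.window_s:
            q.popleft()

    def mean(self, user):
        q = self.events[user]
        return sum(v for _, v in q) / len(q) if q else None

metrics = SlidingWindowMetrics(window_s=60.0)
metrics.observe("user-1", 180.0)   # ms between keystrokes
metrics.observe("user-1", 220.0)
print(metrics.mean("user-1"))      # 200.0
```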