
    Challenges of Internet of Things and Big Data Integration

    The Internet of Things anticipates the connection of physical devices to the Internet and their access to wireless sensor data, which makes it possible to monitor and control the physical world. The convergence with Big Data has opened many new opportunities for business ventures to enter new markets or enhance their operations in current ones. Considering the existing techniques and technologies, it is probably safe to say that the best solution is to use big data tools to provide an analytical solution for the Internet of Things. Based on current technology deployment and adoption trends, the Internet of Things is envisioned as the technology of the future: today's real-world devices can already provide real and valuable analytics, and people use many IoT devices. Despite all the advertisements that companies offer in connection with the Internet of Things, you as a responsible consumer have the right to be suspicious about IoT advertisements. The primary question is: what does the Internet of Things promise in reality, and what are its prospects for the future?
    Comment: Proceedings of the International Conference on Emerging Technologies in Computing 2018 (iCETiC '18), 23rd-24th August 2018, at London Metropolitan University, London, UK. Published by Springer-Verla

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing with the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
    Comment: 59 pages

    The Transition to...Open Access

    This report describes and draws conclusions from the transition of the Association for Learning Technology's journal Research in Learning Technology from toll-access to Open Access, and from being published by one of the "big five" commercial publishers to being published by a specialist Open Access publisher. The focus of the report is on what happened in the run-up to and after the transition, rather than on the process of deciding to switch between publishing models, which is covered in ALT's 2011 report "Journal tendering for societies: a brief guide" - http://repository.alt.ac.uk/887/

    Integrating R and Hadoop for Big Data Analysis

    Analyzing and working with big data can be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools successfully and widely used for the storage and processing of big data sets on clusters of commodity hardware is Hadoop. The Hadoop framework contains libraries, a distributed file system (HDFS), and a resource-management platform, and it implements a version of the MapReduce programming model for large-scale data processing. In this paper we investigate the possibilities of integrating Hadoop with R, a popular software environment for statistical computing and data visualization. We present three ways of integrating them: R with Streaming, Rhipe and RHadoop, and we emphasize the advantages and disadvantages of each solution.
    Comment: Romanian Statistical Review no. 2 / 201
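    The Streaming route the abstract mentions works by piping records through any executable over stdin/stdout, so an R script (or anything else) can act as mapper or reducer. A minimal sketch of the MapReduce word-count pattern, written here in Python with illustrative function names, shows the contract each stage must satisfy:

    ```python
    from itertools import groupby
    from operator import itemgetter

    def mapper(lines):
        # Map stage: emit a (key, value) pair per word, as a streaming
        # mapper would write tab-separated lines to stdout.
        for line in lines:
            for word in line.strip().split():
                yield word.lower(), 1

    def reducer(pairs):
        # Reduce stage: the framework delivers pairs grouped by key;
        # here we sort to simulate the shuffle, then sum each group.
        for word, group in groupby(sorted(pairs), key=itemgetter(0)):
            yield word, sum(count for _, count in group)

    counts = dict(reducer(mapper(["big data big clusters", "data"])))
    print(counts)  # {'big': 2, 'clusters': 1, 'data': 2}
    ```

    In a real Hadoop Streaming job the mapper and reducer are separate scripts and the shuffle/sort between them is handled by the framework, not by an in-process `sorted` call.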

    Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist

    Apache Spark is a popular system aimed at the analysis of large data sets, but recent studies have shown that certain computations---in particular, many linear algebra computations that are the basis for solving common machine learning problems---are significantly slower in Spark than when done using libraries written in a high-performance computing framework such as the Message-Passing Interface (MPI). To remedy this, we introduce Alchemist, a system designed to call MPI-based libraries from Apache Spark. Using Alchemist with Spark helps accelerate linear algebra, machine learning, and related computations, while still retaining the benefits of working within the Spark environment. We discuss the motivation behind the development of Alchemist, and we provide a brief overview of its design and implementation. We also compare the performance of pure Spark implementations with that of Spark implementations that leverage MPI-based codes via Alchemist. To do so, we use two data science case studies: a large-scale application of the conjugate gradient method to solve very large linear systems arising in a speech classification problem, where we see an improvement of an order of magnitude; and the truncated singular value decomposition (SVD) of a 400GB three-dimensional ocean temperature data set, where we see a speedup of up to 7.9x. We also illustrate that the truncated SVD computation is easily scalable to terabyte-sized data by applying it to data sets of sizes up to 17.6TB.
    Comment: Accepted for publication in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 201
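    The truncated SVD in the second case study can be sketched at small, dense scale with NumPy (Alchemist itself dispatches the distributed computation to MPI-based libraries; this only illustrates the mathematical operation, with a random matrix standing in for the ocean-temperature data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))  # stand-in for the real data matrix

    k = 10  # number of singular triplets to keep
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = U[:, :k] * s[:k] @ Vt[:k, :]  # rank-k approximation

    # Eckart-Young: the rank-k truncated SVD is the best rank-k
    # approximation in Frobenius norm, and the residual equals the
    # norm of the discarded singular values.
    residual = np.linalg.norm(A - A_k)
    expected = np.sqrt(np.sum(s[k:] ** 2))
    print(np.isclose(residual, expected))  # True
    ```

    At the terabyte scales reported in the abstract, the dense factorization above is replaced by iterative, communication-avoiding algorithms running on the MPI side.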

    Digital communities: context for leading learning into the future?

    In 2011, a robust, on-campus, three-element Community of Practice model consisting of growing community, sharing of practice and building domain knowledge was piloted in a digital learning environment. An interim evaluation of the pilot study revealed that the three-element framework, when used in a digital environment, required a fourth element. This element, which appears to happen incidentally in the face-to-face context, is that of reflecting, reporting and revising. This paper outlines the extension of the pilot study to the national tertiary education context in order to explore the implications for the design, leadership roles, and selection of appropriate technologies to support and sustain digital communities using the four-element model.