116 research outputs found

    Utilization of Raspberry Pi 3 and Hadoop as a Website-Based Online Storage Limiter

    Computer networks today serve as a means of accessing data at any place and time through computers or mobile devices. Data storage connected to the Internet is expected to provide optimal service when data is moved, uploaded, or downloaded through the devices used by users. As the number of users grows, the storage devices can be flooded with data to be stored. Moreover, the more users there are with large data sets of their own, the more likely the main digital storage device, or hard disk, is to run out of capacity. A storage server is therefore needed as backup data storage on the computer network. This study designs a storage-server technology using the Apache Hadoop Distributed File System (HDFS) and the Java JDK, implemented on a Raspberry Pi acting as the server, with quotas limiting storage capacity. The storage server designed as backup storage can be shared by users, but with limits on capacity and directories. A capacity quota of 10 GB is assigned to each user directory. As a result of this study, the storage server can be shared among users, with each user able to store 10 GB of data in his or her own directory.
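
    As an illustration of the quota mechanism described above, the following minimal sketch (not the authors' code) provisions per-user HDFS directories and caps each at 10 GB using the standard `hdfs dfs` and `hdfs dfsadmin` command-line tools; the user names are hypothetical and the commands are assumed to run on a node where Hadoop is installed and on the PATH.

```python
import subprocess

# Hypothetical user directories on the Raspberry Pi HDFS storage server.
USERS = ["alice", "budi", "citra"]
QUOTA = "10g"  # 10 GB space quota per directory, as in the study

def provision_user_dir(user: str) -> None:
    """Create a per-user directory and cap it with an HDFS space quota."""
    path = f"/user/{user}"
    # Create the directory (with parents) if it does not exist yet.
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", path], check=True)
    # Apply the storage-space quota; writes that would exceed it are rejected.
    # Note: HDFS space quotas count replicated bytes, so the effective
    # allowance per user also depends on the cluster's replication factor.
    subprocess.run(["hdfs", "dfsadmin", "-setSpaceQuota", QUOTA, path], check=True)

if __name__ == "__main__":
    for user in USERS:
        provision_user_dir(user)
```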

    Commodity single board computer clusters and their applications

    © 2018 Current commodity Single Board Computers (SBCs) are sufficiently powerful to run mainstream operating systems and workloads. Many of these boards may be linked together, to create small, low-cost clusters that replicate some features of large data center clusters. The Raspberry Pi Foundation produces a series of SBCs with a price/performance ratio that makes SBC clusters viable, perhaps even expendable. These clusters are an enabler for Edge/Fog Compute, where processing is pushed out towards data sources, reducing bandwidth requirements and decentralizing the architecture. In this paper we investigate use cases driving the growth of SBC clusters, we examine the trends in future hardware developments, and discuss the potential of SBC clusters as a disruptive technology. Compared to traditional clusters, SBC clusters have a reduced footprint, are low-cost, and have low power requirements. This enables different models of deployment—particularly outside traditional data center environments. We discuss the applicability of existing software and management infrastructure to support exotic deployment scenarios and anticipate the next generation of SBC. We conclude that the SBC cluster is a new and distinct computational deployment paradigm, which is applicable to a wider range of scenarios than current clusters. It facilitates Internet of Things and Smart City systems and is potentially a game changer in pushing application logic out towards the network edge

    Greedy nominator heuristic: virtual function placement on fog resources

    Fog computing is an intermediate infrastructure between edge devices (e.g., Internet of Things) and cloud systems that is used to reduce latency in real-time applications. An application can be composed of a collection of virtual functions, between which dependency constraints can be captured in a service function chain (SFC). Virtual functions within an SFC can be executed at different geo-distributed locations. However, virtual functions are prone to failure and often do not complete within a deadline. This results in function reallocation to other nodes within the infrastructure, causing delays, potential data loss during function migration, and increased costs. We propose the Greedy Nominator Heuristic (GNH) to address these issues. GNH is based on redundant deployment and failure tracking of virtual functions. GNH places replicas of each function at multiple locations, taking account of expected completion time, failure risk, and cost. We make use of a MapReduce-based mechanism, where Mappers find suitable locations in parallel, and a Reducer then ranks these locations. Our results show that GNH reduces latency by up to 68%, and is more cost effective than other approaches which rely on state-of-the-art optimization algorithms to allocate replicas.
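
    The abstract describes a map/reduce split in which mappers score candidate locations in parallel and a reducer ranks them for redundant placement. The sketch below illustrates that idea with an invented weighted scoring function; the weights, node attributes, and replica count are assumptions, not the scoring defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    exp_completion: float  # expected completion time (seconds)
    failure_risk: float    # probability of failure in [0, 1]
    cost: float            # monetary cost of running the function here

# Illustrative weights; GNH's actual objective is defined in the paper.
W_TIME, W_RISK, W_COST = 0.5, 0.3, 0.2

def mapper(nodes):
    """'Map' phase: score each candidate location independently (parallelisable)."""
    return [(W_TIME * n.exp_completion + W_RISK * n.failure_risk + W_COST * n.cost, n)
            for n in nodes]

def reducer(scored, replicas=3):
    """'Reduce' phase: rank candidates and nominate the top-k for redundant placement."""
    return [n.name for _, n in sorted(scored, key=lambda s: s[0])[:replicas]]

nodes = [Node("fog-a", 1.2, 0.05, 0.8), Node("fog-b", 0.9, 0.20, 0.5),
         Node("fog-c", 2.0, 0.01, 0.3), Node("fog-d", 1.1, 0.10, 0.4)]
print(reducer(mapper(nodes)))  # ['fog-b', 'fog-d', 'fog-a'] with these toy numbers
```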

    Bibliographic and Text Analysis of Research on Implementation of the Internet of Things to Support Education

    The Internet of Things (IoT) has pervaded practically all aspects of our lives. In this exploratory study, we survey its applications in the field of education. It is evident that technology in general, and, in particular IoT, has been increasingly altering the educational landscape. The goal of this paper is to review the academic literature on IoT applications in education to provide an understanding of the transformation that is underway. Using topic modeling and keyword co-occurrence analysis techniques, we identified five dominant clusters of research. Our findings demonstrate that IoT research in education has mainly focused on the technical aspects; however, the social aspects remain largely unexplored. In addition to providing an overview of IoT research on education, this paper offers suggestions for future research
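
    As a pointer to how the keyword co-occurrence part of such an analysis is commonly computed, the following sketch builds a weighted co-occurrence edge list from hypothetical author-keyword lists; it is a generic illustration, not the authors' pipeline or data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists extracted from a bibliographic export.
papers = [
    ["internet of things", "smart classroom", "learning analytics"],
    ["internet of things", "wearables", "learning analytics"],
    ["internet of things", "smart classroom", "privacy"],
]

cooccurrence = Counter()
for keywords in papers:
    # Each unordered pair of keywords appearing in the same paper adds one edge weight.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The resulting weighted edge list can then be clustered to reveal research themes.
for (a, b), weight in cooccurrence.most_common(3):
    print(f"{a} -- {b}: {weight}")
```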

    Low-latency, query-driven analytics over voluminous multidimensional, spatiotemporal datasets

    Ubiquitous data collection from sources such as remote sensing equipment, networked observational devices, location-based services, and sales tracking has led to the accumulation of voluminous datasets; IDC projects that by 2020 we will generate 40 zettabytes of data per year, while Gartner and ABI estimate 20-35 billion new devices will be connected to the Internet in the same time frame. The storage and processing requirements of these datasets far exceed the capabilities of modern computing hardware, which has led to the development of distributed storage frameworks that can scale out by assimilating more computing resources as necessary. While challenging in its own right, storing and managing voluminous datasets is only the precursor to a broader field of study: extracting knowledge, insights, and relationships from the underlying datasets. The basic building block of this knowledge discovery process is analytic queries, encompassing both query instrumentation and evaluation. This dissertation is centered around query-driven exploratory and predictive analytics over voluminous, multidimensional datasets. Both of these types of analysis represent a higher-level abstraction over classical query models; rather than indexing every discrete value for subsequent retrieval, our framework autonomously learns the relationships and interactions between dimensions in the dataset (including time series and geospatial aspects), and makes the information readily available to users. This functionality includes statistical synopses, correlation analysis, hypothesis testing, probabilistic structures, and predictive models that not only enable the discovery of nuanced relationships between dimensions, but also allow future events and trends to be predicted. This requires specialized data structures and partitioning algorithms, along with adaptive reductions in the search space and management of the inherent trade-off between timeliness and accuracy. The algorithms presented in this dissertation were evaluated empirically on real-world geospatial time-series datasets in a production environment, and are broadly applicable across other storage frameworks.
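
    One of the building blocks named above, statistical synopses, can be pictured with a small sketch: a streaming mean/variance summary (Welford's algorithm) maintained per spatiotemporal partition, so that queries are answered from the synopsis rather than by scanning raw records. The partition key and sample data here are hypothetical; the dissertation's actual structures are considerably richer.

```python
class RunningSynopsis:
    """Streaming mean/variance synopsis (Welford's algorithm) for one partition."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the current mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Hypothetical usage: one synopsis per (geohash, hour) cell of a sensor feed.
synopses = {}
for geohash, hour, temperature in [("9xj6", 14, 21.3), ("9xj6", 14, 22.1), ("9xj6", 15, 19.8)]:
    synopses.setdefault((geohash, hour), RunningSynopsis()).update(temperature)

print(synopses[("9xj6", 14)].mean)  # queries hit the synopsis, not the raw records
```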

    A smart water metering deployment based on the fog computing paradigm

    In this paper, we look into smart water metering infrastructures that enable continuous, on-demand and bidirectional data exchange between metering devices, water flow equipment, utilities and end-users. We focus on the design, development and deployment of such infrastructures as part of larger smart city infrastructures. Until now, such critical smart city infrastructures have been developed following a cloud-centric paradigm, where all the data are collected and processed centrally using cloud services to create real business value. Cloud-centric approaches need to address several performance issues at all levels of the network, as massive metering datasets are transferred to distant machine clouds while respecting issues such as security and data privacy. Our solution uses the fog computing paradigm to provide a system where the computational resources already available throughout the network infrastructure are utilized to greatly facilitate the analysis of fine-grained water consumption data collected by the smart meters, thus significantly reducing the overall load on network and cloud resources. Details of the system's design are presented, along with a pilot deployment in a real-world environment. The performance of the system is evaluated in terms of network utilization and computational performance. Our findings indicate that the fog computing paradigm can be applied to a smart grid deployment to effectively reduce the data volume exchanged between the different layers of the architecture and provide better overall computational, security and privacy capabilities to the system.
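
    The data-volume reduction attributed to the fog layer can be illustrated with a simple sketch in which a fog node rolls per-minute meter readings up to hourly aggregates before forwarding them to the cloud; the meter identifiers and readings are invented, and the deployed system's actual aggregation logic may differ.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical fine-grained readings received at a fog node: (meter_id, epoch_minute, litres).
readings = [
    ("meter-17", 600, 0.8), ("meter-17", 601, 1.1), ("meter-17", 602, 0.0),
    ("meter-42", 600, 2.3), ("meter-42", 601, 2.0),
]

def aggregate_hourly(readings):
    """Roll one-minute readings up to hourly totals before forwarding them to the cloud."""
    buckets = defaultdict(list)
    for meter_id, minute, litres in readings:
        buckets[(meter_id, minute // 60)].append(litres)
    # Only one record per meter per hour leaves the fog layer.
    return {key: (sum(vals), mean(vals)) for key, vals in buckets.items()}

print(aggregate_hourly(readings))
```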

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Cloud media video encoding: review and challenges

    In recent years, Internet traffic patterns have been changing. Most of the traffic demanded by end users is multimedia; in particular, video streaming accounts for over 53% of it. This demand has led to improved network infrastructures and computing architectures to meet the challenges of delivering these multimedia services while maintaining an adequate quality of experience. Focusing on the preparation and adequacy of multimedia content for broadcasting, Cloud and Edge Computing infrastructures have been and will be crucial to offer high and ultra-high definition multimedia content in live, real-time, or video-on-demand scenarios. For these reasons, this review paper presents a detailed study of research papers related to encoding and transcoding techniques in cloud computing environments. It begins by discussing the evolution of streaming and the importance of the encoding process, with a focus on the latest streaming methods and codecs. Then, it examines the role of cloud systems in multimedia environments and provides details on the cloud infrastructure for media scenarios. Through a systematic literature review, we identified 49 valid papers that meet the requirements specified in the research questions. Each paper has been analyzed and classified according to several criteria, in addition to inspecting its relevance. To conclude this review, we identify and elaborate on several challenges and open research issues associated with the development of video codecs optimized for diverse factors within both cloud and edge architectures. Additionally, we discuss emerging challenges in designing new cloud/edge architectures aimed at more efficient delivery of media traffic. This involves investigating ways to improve the overall performance, reliability, and resource utilization of architectures that support the transmission of multimedia content over both cloud and edge computing environments, while ensuring a good quality of experience for the end user.
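
    To make the transcoding workflow discussed in the review concrete, the sketch below splits a source file into segments and re-encodes them in parallel, mimicking the segment-based transcoding commonly used in cloud pipelines; it assumes ffmpeg is installed, and the file names, bitrate, and worker count are illustrative rather than taken from any reviewed system.

```python
import glob
import subprocess
from concurrent.futures import ThreadPoolExecutor

SOURCE = "input.mp4"      # hypothetical mezzanine file
SEGMENT_SECONDS = "10"

# 1) Split the source into fixed-length segments without re-encoding.
subprocess.run(["ffmpeg", "-i", SOURCE, "-c", "copy", "-map", "0",
                "-f", "segment", "-segment_time", SEGMENT_SECONDS, "seg_%03d.mp4"],
               check=True)

def transcode(segment: str) -> str:
    """Re-encode one segment to H.264/AAC; each call could run on a separate cloud worker."""
    out = segment.replace("seg_", "h264_")
    subprocess.run(["ffmpeg", "-y", "-i", segment, "-c:v", "libx264",
                    "-b:v", "2500k", "-c:a", "aac", out], check=True)
    return out

# 2) Fan the segments out to a pool of workers, standing in for cloud/edge transcoding nodes.
with ThreadPoolExecutor(max_workers=4) as pool:
    encoded = list(pool.map(transcode, sorted(glob.glob("seg_*.mp4"))))
print(encoded)
```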