20,399 research outputs found

    Wireless Communications in the Era of Big Data

    Full text link
    The rapidly growing wave of wireless data services is pushing against the boundary of our communication networks' processing power. The pervasive and exponentially increasing data traffic presents imminent challenges to all aspects of wireless system design, such as spectrum efficiency, computing capability, and fronthaul/backhaul link capacity. In this article, we discuss the challenges and opportunities in the design of scalable wireless systems to embrace this "big data" era. On one hand, we review the state-of-the-art networking architectures and signal processing techniques adaptable for managing big data traffic in wireless networks. On the other hand, instead of viewing mobile big data as an unwanted burden, we introduce methods to capitalize on the vast data traffic, for building a big-data-aware wireless network with better wireless service quality and new mobile applications. We highlight several promising future research directions for wireless communications in the mobile big data era. Comment: This article is accepted and to appear in IEEE Communications Magazine.

    Big Data Gears: The Data Accelerator for Large Data in Mobile Networks

    Get PDF
    BigData management is mainly concerned with large-scale data generated from the cloud, mobile computing, media access, and many other sources. Prior work such as BigCache emphasizes serving data to the end user efficiently by provisioning a large cache at the local site, yielding effective bandwidth utilization, low server load, and rapid data access at low cost. Extending this work, we incorporate the notion of data gears into BigData management to accelerate access to large data in mobile networks, which consist of small, portable devices with mobility and limited resources. These constraints lead to many challenges, such as small caches, bandwidth management, power consumption, data sharing, secured access, and so on. We focus mainly on data access and bandwidth in BigData, providing a 'Gear Architecture' for the flexibility, portability, and interchangeability of the various components of BigData over mobile networks. We aim to improve utilization by proposing data access techniques over extended ranges, using BigData analytics as a gear.
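
    As an illustration of the caching idea behind BigCache-style acceleration, below is a minimal Python sketch of a size-bounded local LRU cache; the names (GearCache, fetch_remote) and the capacity figure are hypothetical, not taken from the paper.

    from collections import OrderedDict

    def fetch_remote(key):
        """Stand-in for an expensive fetch over the mobile network (hypothetical)."""
        return (key * 3).encode()  # dummy payload

    class GearCache:
        """Size-bounded LRU cache kept at the local site to save bandwidth and latency."""
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.items = OrderedDict()

        def get(self, key):
            if key in self.items:                 # cache hit: serve locally
                self.items.move_to_end(key)
                return self.items[key]
            data = fetch_remote(key)              # cache miss: pay the network cost once
            while self.used + len(data) > self.capacity and self.items:
                _, evicted = self.items.popitem(last=False)  # evict least recently used
                self.used -= len(evicted)
            if len(data) <= self.capacity:
                self.items[key] = data
                self.used += len(data)
            return data

    cache = GearCache(capacity_bytes=64)
    cache.get("chunk-01")   # fetched remotely, then cached
    cache.get("chunk-01")   # served from the small local cache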

    Comparative Study of Various Image Processing Tools with BigData Images in Parallel Environment

    Get PDF
    BigData refers to collections of various kinds of data in very large amounts. This data needs to be processed with suitable tools, as it can contain useful information. Hadoop came into existence to process such data in a parallel, distributed environment. BigData contains various forms of data, such as text, images, and videos. In this paper we discuss image data: processing these BigData images requires image processing, and because BigData is very large, the number of images to be processed is also very large. Image processing can therefore be performed in a parallel environment to speed up the process. There are various tools for processing images in parallel, such as HIPI, OpenCV, CUDA, and MIPr. In this paper we present a comparative study of these tools.
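
    For a concrete sense of how such tools parallelize per-image work, here is a minimal sketch using OpenCV (cv2) with Python's multiprocessing pool; it assumes opencv-python is installed, and the images/*.jpg paths and the edge-detection step are placeholders, not the paper's benchmark.

    import glob
    from multiprocessing import Pool
    import cv2

    def to_edges(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # load one image
        if img is None:
            return "skipped " + path
        edges = cv2.Canny(img, 100, 200)               # simple per-image operation
        out = path + ".edges.png"
        cv2.imwrite(out, edges)
        return out

    if __name__ == "__main__":
        paths = glob.glob("images/*.jpg")              # a BigData image collection in practice
        with Pool() as pool:                           # process images in parallel workers
            for result in pool.map(to_edges, paths):
                print(result)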

    Cost management model in BigData projects for startups based on PMI

    Get PDF
    This article proposes a cost management model for BigData projects in startups, using a standardized approach based on the framework of the PMI (Project Management Institute) to ease the implementation of projects with these technologies, speeding up and optimizing cost estimation and subsequent cost control during the execution of a BigData project. The paper presents the principles of BigData, to clarify the kinds of technologies such a project requires and how their cost is managed; it then presents the PMI methodology for cost management, identifying its main components and proposing a methodology that follows standard guidelines for applying the essential elements of the costing process to a BigData project built on current cloud tools, which give an emerging company, or startup, access to this technology. Through the work presented, it was possible to propose a standard model for cost management in a small, emerging organization that eases the construction of specific cost management models, identifying the fundamental factors, reducing implementation time, and serving as a guide when applied to projects of this type.
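
    To make the PMI cost-control step concrete, here is a small Python sketch of the standard earned value metrics from PMI cost management (CV, SV, CPI, SPI, and estimate at completion); the sample figures are hypothetical, not from the article.

    def earned_value_report(pv, ev, ac, bac):
        """PMI earned value metrics from planned value (PV), earned value (EV),
        actual cost (AC), and budget at completion (BAC)."""
        cpi = ev / ac                      # cost performance index
        spi = ev / pv                      # schedule performance index
        return {
            "cost_variance": ev - ac,      # CV = EV - AC
            "schedule_variance": ev - pv,  # SV = EV - PV
            "CPI": cpi,
            "SPI": spi,
            "EAC": bac / cpi,              # estimate at completion, assuming current CPI holds
        }

    # Hypothetical startup project: cloud cluster plus data-pipeline work packages.
    print(earned_value_report(pv=40_000, ev=35_000, ac=42_000, bac=120_000))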

    An Experiment on Bare-Metal BigData Provisioning

    Full text link
    Many BigData customers use on-demand platforms in the cloud, where they can get a dedicated virtual cluster in a couple of minutes and pay only for the time they use. Increasingly, there is demand for bare-metal BigData solutions for applications that cannot tolerate the unpredictability and performance degradation of virtualized systems. Existing bare-metal solutions can introduce delays of tens of minutes to provision a cluster by installing operating systems and applications on the local disks of servers, which has motivated recent research into sophisticated mechanisms to optimize this installation. These approaches assume that using network-mounted boot disks incurs unacceptable run-time overhead. Our analysis suggests that while this assumption is true for application data, it is incorrect for operating systems and applications: network-mounting the boot disk and applications has negligible run-time impact while leading to faster provisioning. This research was supported in part by the MassTech Collaborative Research Matching Grant Program, NSF awards 1347525 and 1414119, and several commercial partners of the Massachusetts Open Cloud, who may be found at http://www.massopencloud.org
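
    As a rough illustration of the network-mounted boot disks the analysis favors, here is a hedged Python sketch that attaches a remote boot disk over iSCSI using the open-iscsi iscsiadm CLI; the target IQN and portal address are placeholders, and this is not presented as the paper's actual provisioning mechanism.

    import subprocess

    TARGET = "iqn.2016-01.org.example:bootdisk01"   # hypothetical target name
    PORTAL = "192.0.2.10:3260"                      # hypothetical iSCSI portal

    def attach_boot_disk():
        # Discover the targets exposed by the portal (requires open-iscsi and root).
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                        "-p", PORTAL], check=True)
        # Log in: the remote disk appears as a local block device, so the node
        # can boot without a lengthy OS install onto its local disks.
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                        "-p", PORTAL, "--login"], check=True)

    if __name__ == "__main__":
        attach_boot_disk()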

    Hadoop Distributed File System (HDFS) and Various Facts Related to Big Data

    Get PDF
    The term big data, particularly when used by vendors, may refer to the technology that an organization requires to handle large amounts of data and the associated storage facilities. The term BigData is believed to have originated with Web search companies that needed to query large distributed collections of loosely structured data. BigData consists of high-volume, high-velocity, and high-variety data resources that demand cost-effective, innovative forms of data processing for enhanced insight and decision making. Hadoop, used to process unstructured and semi-structured BigData, uses the map-reduce paradigm to locate all relevant data and then select only the data directly answering the query. NoSQL, MongoDB, and TerraStore process structured BigData. NoSQL data is characterized as basically available, soft state (variable), and eventually consistent. MongoDB and TerraStore are both NoSQL-related products used for document-oriented applications. The advent of the BigData era presents opportunities and challenges for organizations: previously inaccessible forms of data can now be stored, retrieved, and processed, but changes to hardware, software, and data processing techniques are necessary to employ this new paradigm. Data analytics will displace the use of structured queries over relational database management systems alone. Benefits of BigData use for business executives include enhanced data sharing through transparency, improved performance through analysis, increased market segmentation, increased decision support through advanced analytics, and a greater ability to innovate products, services, and business models. Business owners need to follow trends in BigData carefully to make the decisions that fit their businesses.
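
    The map-reduce pattern described above, locating relevant records and then aggregating only those, can be sketched in a few lines of Python; on a real cluster the same two functions would run as Hadoop Streaming mapper and reducer scripts, and the query term and sample log here are placeholders.

    from itertools import groupby
    from operator import itemgetter

    def mapper(lines, query="error"):            # query term is a placeholder
        for line in lines:
            if query in line.lower():            # locate only the relevant data
                yield (query, 1)

    def reducer(pairs):
        for key, group in groupby(sorted(pairs), key=itemgetter(0)):
            yield (key, sum(v for _, v in group))  # aggregate per key

    log = ["ok: request served", "ERROR: disk full", "error: timeout"]
    print(list(reducer(mapper(log))))            # [('error', 2)]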

    Prototyping Workload-based Resource Coordination for Cloud-leveraged HPC/BigData Cluster Sharing

    Get PDF
    Recently, high-performance computing (HPC) and BigData workloads have increasingly been running over cloud-leveraged shared resources, whereas traditionally dedicated clusters were configured only for specific workloads. To improve resource utilization efficiency, shared resource clusters are required to support both HPC and BigData workloads. In this paper, we therefore discuss a prototyping effort to enable workload-based resource coordination for a cloud-leveraged shared HPC/BigData cluster. Taking an OpenStack cloud-leveraged shared cluster as an example, we demonstrate the possibility of workload-based bare-metal cluster reconfiguration with interchangeable cluster provisioning and associated monitoring support.
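
    A hypothetical Python sketch of the workload-based coordination idea follows: bare-metal nodes shift between an HPC pool and a BigData pool according to queued demand. All names and the rebalancing rule are illustrative; the actual prototype drives reconfiguration through OpenStack bare-metal provisioning and monitoring.

    from dataclasses import dataclass

    @dataclass
    class Cluster:
        hpc_nodes: int
        bigdata_nodes: int
        hpc_queue: int = 0        # pending HPC jobs
        bigdata_queue: int = 0    # pending BigData jobs

        def rebalance(self):
            # Shift one node toward whichever workload is more backlogged.
            if self.hpc_queue > self.bigdata_queue and self.bigdata_nodes > 0:
                self.bigdata_nodes -= 1
                self.hpc_nodes += 1       # reprovision node with the HPC image
            elif self.bigdata_queue > self.hpc_queue and self.hpc_nodes > 0:
                self.hpc_nodes -= 1
                self.bigdata_nodes += 1   # reprovision node with the BigData image

    c = Cluster(hpc_nodes=4, bigdata_nodes=4, hpc_queue=10, bigdata_queue=2)
    c.rebalance()
    print(c.hpc_nodes, c.bigdata_nodes)   # 5 3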