1,283 research outputs found

    An Improved Algorithm for Faster Multi Keyword Search in Structured Organization

    Searching is a major concern in database operations. Accessing information from a database always requires a search, and the more efficient the search process, the less time it takes to retrieve the required information. The scenario considered here is information organized as structured data in a large organization, such as an automobile company maintaining information about its departments. When a keyword-based query is issued, the queries are formed on the basis of the keywords. To reduce the time spent forming queries from the tables, this work suggests using an associative mapping table in the search mechanism, which reduces the time involved in the process. The main aim of this work is to save CPU time and utilize the CPU efficiently, serving the goal of green computing
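The abstract does not give the structure of its associative mapping table, but the idea it describes can be sketched as a lookup from each keyword to the (table, column) pairs where it may occur, so that queries are assembled by direct lookup rather than by scanning the schema on every search. All table names, columns, and keywords below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical associative mapping table: each keyword maps to the
# (table, column) pairs where it may appear. Schema names are invented
# for illustration (an automobile company, as in the abstract).
MAPPING_TABLE = {
    "engine":  [("parts", "name"), ("inventory", "description")],
    "chassis": [("parts", "name")],
    "invoice": [("sales", "doc_type")],
}

def build_queries(keywords):
    """Form one SELECT per (table, column) hit via constant-time lookup.

    A real system should use parameterized queries instead of string
    interpolation; this sketch only shows the mapping-table lookup.
    """
    queries = []
    for kw in keywords:
        for table, column in MAPPING_TABLE.get(kw, []):
            queries.append(
                f"SELECT * FROM {table} WHERE {column} LIKE '%{kw}%'"
            )
    return queries

print(build_queries(["engine", "invoice"]))
```

Because query formation is a dictionary lookup per keyword, its cost is independent of the number of tables in the schema, which is the time saving the abstract claims.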

    BIG DATA ANALYTICS - AN OVERVIEW

       Big Data analytics has been gaining more attention recently as researchers in business and academia try to effectively mine and use all possible knowledge from the vast amounts of data generated and collected. Traditional data analysis methods stumble over large amounts of data arriving in a short period of time, demanding a paradigm shift in the storage, processing and analysis of Big Data. Because of its importance, many U.S. agencies, including the government, have in recent years released large amounts of funding for research in Big Data and related fields. This paper gives a concise summary of research progress in various areas related to Big Data processing and analysis and concludes with a discussion of research directions in those areas.

    Big data and humanitarian supply networks: Can Big Data give voice to the voiceless?

    This is the author's accepted manuscript. The final published article is available from the link below. Copyright © 2013 IEEE. Billions of US dollars are spent each year in emergency aid to save lives and alleviate the suffering of those affected by disaster. This aid flows through a humanitarian system that consists of governments, different United Nations agencies, the Red Cross movement and myriad non-governmental organizations (NGOs). As scarce resources, financial crises and economic interdependencies continue to constrain humanitarian relief, there is an increasing focus among donors and governments on assessing the impact of humanitarian supply networks. Using commercial (`for-profit') supply networks as a benchmark, this paper exposes the counter-intuitive competition dynamic of humanitarian supply networks, which results in an open-loop system unable to calibrate supply with actual need and impact. In that light, the phenomenon of Big Data in the humanitarian field is discussed and an agenda for the `datafication' of the supply network is set out as a means of closing the loop between supply, need and impact

    v. 83, issue 9, December 3, 2015


    Deploying Large-Scale Datasets on-Demand in the Cloud: Treats and Tricks on Data Distribution

    Public clouds have democratised access to analytics for virtually any institution in the world. Virtual Machines (VMs) can be provisioned on demand and used to crunch data after it is uploaded into the VMs. While this task is trivial for a few tens of VMs, it becomes increasingly complex and time consuming when the scale grows to hundreds or thousands of VMs crunching tens or hundreds of TB. Moreover, the elapsed time comes at a price: the cost of provisioning VMs in the cloud and keeping them waiting to load the data. In this paper we present a big data provisioning service that incorporates hierarchical and peer-to-peer data distribution techniques to speed up data loading into the VMs used for data processing. The system dynamically mutates the sources of the data for the VMs to speed up data loading. We tested this solution with 1000 VMs and 100 TB of data, reducing time by at least 30% over current state-of-the-art techniques. This dynamic topology mechanism is tightly coupled with classic declarative machine configuration techniques (the system takes a single high-level declarative configuration file and configures both software and data loading). Together, these two techniques simplify the deployment of big data in the cloud for end users who may not be experts in infrastructure management. Index Terms: large-scale data transfer, flash crowd, big data, BitTorrent, p2p overlay, provisioning, big data distribution
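The abstract's core idea, dynamically mutating data sources so that VMs which have finished loading become sources for the rest, can be illustrated with a minimal round-based simulation. The fanout per source and the promotion rule are assumptions for illustration; the paper's actual protocol (BitTorrent-style chunk exchange) is more elaborate.

```python
def load_rounds(num_vms, fanout=2):
    """Rounds until all VMs hold the dataset, assuming each source can
    serve `fanout` VMs per round and every VM that finishes loading is
    promoted to a source for subsequent rounds (peer-to-peer promotion).
    """
    sources, pending, rounds = 1, num_vms, 0  # one origin (e.g. object store)
    while pending > 0:
        served = min(pending, sources * fanout)
        pending -= served
        sources += served   # finished VMs become new sources
        rounds += 1
    return rounds

# The source pool grows geometrically, so 1000 VMs load in a handful
# of rounds rather than 500 sequential transfers from a single origin.
print(load_rounds(1000))
```

This geometric growth in sources is what makes peer-to-peer distribution scale to the 1000-VM / 100 TB deployments reported in the abstract, where a single origin would be the bottleneck.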

    Data Processing and the Envision

    Data is being generated very rapidly due to the increase of information in everyday life. Huge amounts of data accumulate from various organizations and are difficult to analyze and exploit. Data created by an expanding number of sensors in the environment (such as traffic cameras and satellites), activity on social networking sites, healthcare databases, government databases, sales data and so on are examples of such huge data. Processing, analyzing and communicating this data is a challenge. Online shopping websites are flooded with voluminous sales data every day, and analyzing and visualizing this data for information retrieval is a difficult task. A large number of information visualization techniques have been developed over the last decade to support the exploration of large data sets. With today's data management systems, it is only possible to view quite small portions of the data: if the data is presented textually, the amount that can be displayed is in the range of some 100 data items, which is a drop in the ocean when dealing with data sets containing millions of items. Therefore, a system is required that will effectively analyze and visualize data.
This paper focuses on a system that visualizes sales data to help users apply business intelligence, generate revenue, make decisions, manage business operations and track the progress of tasks. Effective and efficient data visualization is a key part of the discovery process: it mediates between human intuition and the quantitative context of the data, and is thus an essential component of the scientific path from data to knowledge and understanding