Big Data Coordination Platform: Full Proposal 2017-2022
This proposal for a Big Data and ICT Platform focuses on enhancing CGIAR and partner capacity to deliver big data management, analytics, and ICT-focused solutions to CGIAR target geographies and communities. The ultimate goal of the Platform is to harness the capabilities of big data to accelerate and enhance the impact of international agricultural research. It will support CGIAR’s mission by creating an enabling environment where data are expertly managed and used effectively to strengthen delivery on the CGIAR SRF’s System Level Outcome (SLO) targets. Critical gaps were identified during extensive scoping consultations with CGIAR researchers and partners (provided in Annex 8); the Platform will address them through ambitious partnerships with initiatives and organizations outside CGIAR, both upstream and downstream, public and private. It will focus on promoting CGIAR-wide collaboration across CRPs and Centers, in addition to developing new partnership models with big data leaders at the global level. As a result, CGIAR and partner capacity will be enhanced, external partnerships will be leveraged, and an institutional culture of collaborative data management and analytics will be established. Important international public goods, such as new global and regional datasets, will be developed, alongside new methods that help CGIAR use the data revolution as an additional means of delivering on the SLOs.
Alliance for a Data Revolution: CGIAR Platform for Big Data in Agriculture 2017 Convention Report
On September 19-22, 2017, the Consultative Group on International Agricultural Research (CGIAR) gathered over 300 local and international researchers, non-profits, and public and private sector actors for the first CGIAR Platform for Big Data in Agriculture Convention, hosted by the International Center for Tropical Agriculture (CIAT) in Palmira, Colombia. The Convention marked the programmatic launch of the Platform, which aims to enable the development sector to embrace data and other digital technology approaches to solve agricultural development problems faster, better, and at greater scale.
The Platform works across the CGIAR network and CGIAR Research Programs (CRPs), and with the full gamut of stakeholders in the agriculture sector as they grapple with creating, curating, and sharing data to enable new approaches to complex development challenges.
The Platform is designed around three strategic pillars: Organize, Convene, and Inspire. The first aims to organize data so that datasets are findable, accessible, and interoperable, and can increasingly feed big data analytics. This pillar will also develop open digital infrastructures for the sector that support CGIAR’s work and enable new partnerships and innovations. The second convenes analysts, researchers, and public, private, and non-profit actors in the agriculture sector to build new partnerships that both shape and fully leverage digital technologies in support of global agricultural development. The final pillar inspires these actors to push the limits of research and innovation, generating new data-driven approaches that solve real-world development problems faster, cheaper, and more efficiently.
BDGS: A Scalable Big Data Generator Suite in Big Data Benchmarking
Data generation is a key issue in big data benchmarking that aims to generate
application-specific data sets to meet the 4V requirements of big data.
Specifically, big data generators need to generate scalable data (Volume) of
different types (Variety) under controllable generation rates (Velocity) while
keeping the important characteristics of raw data (Veracity). This raises new
challenges in designing generators that are both efficient and faithful to the
source data. To date, most existing techniques can only generate limited types
of data and support only specific big data systems such as Hadoop. Hence we develop
a tool, called Big Data Generator Suite (BDGS), to efficiently generate
scalable big data while employing data models derived from real data to
preserve data veracity. The effectiveness of BDGS is demonstrated by developing
six data generators covering three representative data types (structured,
semi-structured and unstructured) and three data sources (text, graph, and
table data).
Big Data Visualization Tools
Data visualization is the presentation of data in a pictorial or graphical
format, and a data visualization tool is the software that generates this
presentation. Data visualization provides users with intuitive means to
interactively explore and analyze data, enabling them to effectively identify
interesting patterns, infer correlations and causalities, and supports
sense-making activities.
Comment: This article appears in Encyclopedia of Big Data Technologies,
Springer, 201
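At its simplest, a visualization tool maps data values onto a graphical encoding. The toy sketch below (plain Python, purely illustrative) renders category counts as a horizontal text bar chart, the length of each bar encoding the count:

```python
from collections import Counter

def bar_chart(values, width=40):
    """Render category counts as horizontal text bars, scaling the
    longest bar to `width` characters."""
    counts = Counter(values)
    peak = max(counts.values())
    lines = []
    for label, n in counts.most_common():
        bar = "#" * round(n / peak * width)
        lines.append(f"{label:>10} | {bar} {n}")
    return "\n".join(lines)

data = ["maize"] * 8 + ["rice"] * 5 + ["cassava"] * 2
print(bar_chart(data))
```

Real tools add the interactivity the abstract describes (zooming, filtering, linked views), but the mapping from values to visual marks is the same principle.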
Big Data Dimensional Analysis
Collecting and analyzing large amounts of data is a growing challenge
within the scientific community. The growing gap between data and users calls
for innovative tools that address the challenges faced by big data volume,
velocity and variety. One of the main challenges associated with big data
variety is automatically understanding the underlying structures and patterns
of the data. Such an understanding is required as a pre-requisite to the
application of advanced analytics to the data. Further, big data sets often
contain anomalies and errors that are difficult to know a priori. Current
approaches to understanding data structure are drawn from the traditional
database ontology design. These approaches work, but often require too
much human involvement to be practical for the volume, velocity and variety of
data encountered by big data systems. Dimensional Data Analysis (DDA) is a
proposed technique that allows big data analysts to quickly understand the
overall structure of a big dataset and detect anomalies. DDA exploits
structures that exist in a wide class of data to quickly determine the nature
of the data and its statistical anomalies. DDA leverages existing schemas that are
employed in big data databases today. This paper presents DDA, applies it to a
number of data sets, and measures its performance. The overhead of DDA is low
and can be applied to existing big data systems without greatly impacting their
computing requirements.
Comment: From IEEE HPEC 201
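A per-column profile of the kind such structural analysis produces can be sketched as follows. This is a simplified illustration in the spirit of the abstract, not the paper's algorithm: it infers each column's type and cardinality from the schema, and flags coarse statistical anomalies with a robust median/MAD rule:

```python
import statistics

def dimensional_summary(rows):
    """Profile each column of a list-of-dicts dataset: inferred type,
    cardinality, and values flagged as statistical anomalies.
    (A simplified sketch, not the HPEC paper's DDA algorithm.)"""
    columns = {key: [row[key] for row in rows] for key in rows[0]}
    report = {}
    for name, values in columns.items():
        numeric = []
        for v in values:
            try:
                numeric.append(float(v))
            except (TypeError, ValueError):
                pass
        info = {"cardinality": len(set(map(str, values)))}
        if len(numeric) == len(values):
            info["type"] = "numeric"
            # Median absolute deviation is robust to the very outliers
            # we are trying to detect.
            med = statistics.median(numeric)
            mad = statistics.median(abs(x - med) for x in numeric)
            info["anomalies"] = [x for x in numeric
                                 if mad and abs(x - med) > 5 * mad]
        else:
            info["type"] = "categorical"
            info["anomalies"] = []
        report[name] = info
    return report

rows = [{"yield": 2.1, "crop": "maize"},
        {"yield": 2.3, "crop": "rice"},
        {"yield": 250.0, "crop": "maize"}]  # 250.0 is an injected outlier
summary = dimensional_summary(rows)
```

Because the pass is a single scan per column, it adds little overhead on top of an existing big data store, matching the low-cost claim in the abstract.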
