Origins of Modern Data Analysis Linked to the Beginnings and Early Development of Computer Science and Information Engineering
The history of data analysis addressed here is underpinned by two themes: tabular data analysis, and the analysis of collected heterogeneous data. "Exploratory data analysis" is taken as the heuristic approach that begins with data and information and seeks underlying explanations for what is observed or measured. I also cover some of the evolving context of research and applications, including scholarly publishing, technology transfer, and the economic relationship of the university to society.
Harnessing data flow and modelling potentials for sustainable development
Tackling some of the global challenges relating to health, poverty, business and the environment is known to be heavily dependent on the flow and utilisation of data. However, while enhancements in data generation, storage, modelling and dissemination, and the related integration of global economies and societies, are fast transforming the way we live and interact, the resulting dynamic, globalised information society remains digitally divided. On the African continent in particular, the division has resulted in a gap between knowledge generation and its transformation into tangible products and services, which Kirsop and Chan (2005) attribute to a broken information flow. This paper proposes some fundamental approaches for a sustainable transformation of data into knowledge for the purpose of improving people's quality of life. Its main strategy is based on a generic data-sharing model providing access to data-utilising and data-generating entities in a multidisciplinary environment. It highlights the great potential of unsupervised and supervised modelling in tackling the typically predictive challenges we face. Using both simulated and real data, the paper demonstrates how some of the key parameters may be generated and embedded in models to enhance their predictive power and reliability.
Its main outcomes include a proposed implementation framework setting the scene for the creation of decision support systems capable of addressing the key issues in society. It is expected that a sustainable data flow will forge synergies between the private sector and academic and research institutions within and between countries. It is also expected that the paper's findings will help in the design and development of knowledge extraction from data in the wake of cloud computing and, hence, contribute towards the improvement in people's overall quality of life. To avoid incurring high implementation costs, selected open-source tools are recommended for developing and sustaining the system.
Key words: Cloud Computing, Data Mining, Digital Divide, Globalisation, Grid Computing, Information Society, KTP, Predictive Modelling, STI
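The abstract's pairing of unsupervised and supervised modelling, where generated parameters are embedded in predictive models, can be sketched in miniature. The following is a hypothetical illustration, not the paper's actual models or data: a simple one-dimensional k-means step (unsupervised) derives cluster labels from simulated observations, and those labels could then serve as an extra input parameter to a downstream supervised predictor.

```python
import random

def kmeans_1d(xs, iters=20):
    """Minimal 1-D k-means with k=2: alternate assigning points to the
    nearer centroid and recomputing each centroid as its group mean."""
    c1, c2 = min(xs), max(xs)
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1 = sum(g1) / len(g1) if g1 else c1
        c2 = sum(g2) / len(g2) if g2 else c2
    return c1, c2

def label(x, c1, c2):
    # The generated parameter: a cluster membership per observation.
    return 0 if abs(x - c1) <= abs(x - c2) else 1

# Simulated data (made up for illustration): two well-separated groups.
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(50)] + \
     [random.gauss(10, 1) for _ in range(50)]
c1, c2 = kmeans_1d(xs)
labels = [label(x, c1, c2) for x in xs]
print(min(c1, c2), max(c1, c2))  # centroids near 0 and near 10
```

In the spirit of the abstract, `labels` is the kind of unsupervised-derived feature that would be embedded alongside raw inputs when fitting a supervised predictive model.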
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and transformational innovations in technology. However, its nature and fundamental challenges have not been fully recognized, and a methodology of its own has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What are the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
Race: the difference that makes a difference
During the last two decades, critical enquiry into the nature of race has begun to enter the philosophical mainstream. The same period has also witnessed the emergence of an increasingly visible discourse about the nature of information within a diverse range of popular and academic settings. What is yet to emerge, however, is engagement at the interface of the two disciplines: critical race theory and the philosophy of information. In this paper, I shall attempt to contribute towards the emergence of such a field of enquiry by using a reflexive hermeneutic (or interpretative) approach to analyze the concept of race from an information-theoretical perspective, while reflexively analyzing the concept of information from a critical race-theoretical perspective. In order to facilitate a more concrete enquiry, the concept of information formulated by cyberneticist Gregory Bateson and the concept of race formulated by philosopher Charles W. Mills will be placed at the centre of analysis. Crucially, both concepts can be shown to have a connection to the critical philosophy of Immanuel Kant, thereby justifying their selection as topics of examination on critical reflexive hermeneutic grounds.
The computational turn: thinking about the digital humanities
No description supplied.
Access to recorded interviews: A research agenda
Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state of the art for key component technologies. A large number of important research issues are identified, and from that set of issues a coherent research agenda is proposed.
Journal Maps, Interactive Overlays, and the Measurement of Interdisciplinarity on the Basis of Scopus Data (1996-2012)
Using Scopus data, we construct a global map of science based on aggregated
journal-journal citations from 1996-2012 (N of journals = 20,554). This base
map enables users to overlay downloads from Scopus interactively. Using a
single year (e.g., 2012), results can be compared with mappings based on the
Journal Citation Reports of the Web of Science (N = 10,936). The Scopus maps
are more detailed at both the local and global levels because of their greater
coverage, including, for example, the arts and humanities. The base maps can be
interactively overlaid with journal distributions in sets downloaded from
Scopus, for example, for the purpose of portfolio analysis. Rao-Stirling
diversity can be used as a measure of interdisciplinarity in the sets under
study. Maps at the global and the local level, however, can be very different
because of the different levels of aggregation involved. Two journals, for
example, can both belong to the humanities in the global map, but participate
in different specialty structures locally. The base map and interactive tools
are available online (with instructions) at
http://www.leydesdorff.net/scopus_ovl. Comment: accepted for publication in
the Journal of the Association for Information Science and Technology (JASIST).
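The Rao-Stirling diversity used in this abstract as a measure of interdisciplinarity is the well-known quadratic form over category shares and pairwise distances, Delta = sum over i != j of p_i * p_j * d_ij. A minimal sketch, with made-up shares and distances (not data from the paper):

```python
import numpy as np

def rao_stirling(p, d):
    """Rao-Stirling diversity: sum over pairs i != j of p_i * p_j * d_ij,
    where p gives a portfolio's distribution over categories and d_ij is
    the (cognitive) distance between categories i and j, with d_ii = 0."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(d, dtype=float)
    # With a zero diagonal, the full quadratic form p @ d @ p equals the
    # sum over all ordered pairs i != j.
    return float(p @ d @ p)

# Illustrative values: three categories, distances scaled to [0, 1].
p = [0.5, 0.3, 0.2]
d = [[0.0, 0.8, 0.6],
     [0.8, 0.0, 0.4],
     [0.6, 0.4, 0.0]]
print(round(rao_stirling(p, d), 3))  # → 0.408
```

Higher values arise when a set of journals spreads its citations evenly over categories that are cognitively distant from one another, which is the sense of interdisciplinarity the abstract describes.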