Bank Networks from Text: Interrelations, Centrality and Determinants
In the wake of the still ongoing global financial crisis, bank
interdependencies have come into focus in trying to assess linkages among banks
and systemic risk. To date, such analysis has largely been based on numerical
data. By contrast, this study attempts to gain further insight into bank
interconnections by tapping into financial discourse. We present a
text-to-network process, which has its basis in co-occurrences of bank names
and can be analyzed quantitatively and visualized. To quantify bank importance,
we propose an information centrality measure to rank and assess trends of bank
centrality in discussion. For qualitative assessment of bank networks, we put
forward a visual, interactive interface for better illustrating network
structures. We illustrate the text-based approach on European Large and Complex
Banking Groups (LCBGs) during the ongoing financial crisis by quantifying bank
interrelations and centrality from discussion in 3 million news articles spanning
2007Q1 to 2014Q3.
Comment: Quantitative Finance, forthcoming in 201
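The co-occurrence idea above can be sketched in a few lines: count, per article, how often pairs of bank names are mentioned together, then rank banks by weighted degree. The weighted degree is a simple stand-in for the paper's information centrality measure, and the bank names and articles below are toy examples, not the paper's data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical bank names and toy "articles"; the study uses 3 million news articles.
BANKS = {"Deutsche Bank", "BNP Paribas", "Santander", "UniCredit"}

articles = [
    "Deutsche Bank and BNP Paribas agreed on a swap deal.",
    "Santander, BNP Paribas and UniCredit face new stress tests.",
    "Deutsche Bank denied exposure to UniCredit.",
]

def cooccurrence_edges(articles, banks):
    """Count how often each pair of bank names appears in the same article."""
    edges = Counter()
    for text in articles:
        mentioned = sorted(b for b in banks if b in text)
        for pair in combinations(mentioned, 2):
            edges[pair] += 1
    return edges

def weighted_degree(edges):
    """Sum of co-occurrence weights per bank: a crude proxy for centrality in discussion."""
    deg = Counter()
    for (a, b), w in edges.items():
        deg[a] += w
        deg[b] += w
    return deg

edges = cooccurrence_edges(articles, BANKS)
centrality = weighted_degree(edges)
```

The resulting weighted edge list is exactly the kind of object that can be visualized interactively, as the paper's interface does.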
Software tools for conducting bibliometric analysis in science: An up-to-date review
Bibliometrics has become an essential tool for assessing and analyzing the output of scientists, cooperation between
universities, the effect of state-owned science funding on national research and development performance and educational
efficiency, among other applications. Therefore, professionals and scientists need a range of theoretical and practical
tools to measure experimental data. This work provides an up-to-date review of the various tools available
for conducting bibliometric and scientometric analyses, including the sources of data acquisition, performance analysis
and visualization tools. The included tools were divided into three categories: general bibliometric and performance
analysis, science mapping analysis, and libraries; a description of all of them is provided. A comparative analysis of the
supported database sources, pre-processing capabilities, and analysis and visualization options is also provided to
facilitate their understanding. Although there are numerous bibliographic databases from which to obtain data for bibliometric and
scientometric analysis, each has been developed for a different purpose. The number of exportable records ranges between
500 and 50,000 and the coverage of the different science fields is unequal in each database. Concerning the analyzed
tools, Bibliometrix contains the most extensive set of techniques and is suitable for practitioners through its Biblioshiny interface.
VOSviewer offers excellent visualization capabilities and can load and export information from many sources. SciMAT
is the tool with the most powerful pre-processing and export capabilities. In view of the variability of features, users need to
decide on the desired analysis output and choose the option that best fits their aims
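As a concrete instance of the performance analysis these tools automate, the h-index can be computed directly from a list of citation counts exported from any of the databases discussed; the counts below are illustrative values, not real data.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Toy citation counts, as they might be exported from a bibliographic database.
scores = [10, 8, 5, 4, 3]
```

Here `h_index(scores)` is 4, since four papers have at least four citations each but not five papers with at least five.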
Concept discovery innovations in law enforcement: a perspective.
In the past decades, the amount of information available to law enforcement agencies has increased significantly. Most of this information is in textual form; however, analyses have mainly focused on structured data. In this paper, we give an overview of the concept discovery projects at the Amsterdam-Amstelland police, where Formal Concept Analysis (FCA) is being used as a text mining instrument. FCA is combined with statistical techniques such as Hidden Markov Models (HMM) and Emergent Self-Organizing Maps (ESOM). The combination of this concept discovery and refinement technique with statistical techniques for analyzing high-dimensional data resulted not only in new insights but often also in actual improvements of the investigation procedures.
Keywords: formal concept analysis; intelligence-led policing; knowledge discovery
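A minimal sketch of the FCA machinery the paper builds on: given a binary context of objects and attributes, the formal concepts are the pairs (extent, intent) fixed under the two derivation operators. The incident reports and terms below are invented for illustration; the actual police data is of course not reproduced here.

```python
from itertools import combinations

# Toy context: objects (reports) x attributes (observed terms); hypothetical data.
context = {
    "report1": {"violence", "street"},
    "report2": {"violence", "domestic"},
    "report3": {"violence", "street", "night"},
}

def intent(objs, context, all_attrs):
    """Derivation operator A': attributes shared by every object in objs."""
    result = set(all_attrs)
    for o in objs:
        result &= context[o]
    return frozenset(result)

def extent(attrs, context):
    """Derivation operator B': objects having every attribute in attrs."""
    return frozenset(o for o, a in context.items() if attrs <= a)

def formal_concepts(context):
    """Enumerate all (extent, intent) pairs by closing every subset of objects.
    Exponential in the number of objects, so only suitable for small contexts."""
    all_attrs = set().union(*context.values())
    objects = list(context)
    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            b = intent(objs, context, all_attrs)
            a = extent(b, context)
            concepts.add((a, b))
    return concepts

concepts = formal_concepts(context)
```

Each concept groups reports that share exactly the same combination of terms, which is the grouping an analyst would then inspect and refine.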
Public survey instruments for business administration using social network analysis and big data
Purpose: The subject matter of this research is closely intertwined with the scientific discussion about the necessity of developing and implementing practice-oriented means of measuring social well-being that take into account the intensity of contacts between individuals. The aim of the research is to test a toolkit for analyzing social networks and to develop a research algorithm for identifying sources of consolidation of public opinion and key agents of influence. The research methodology is based on postulates of sociology, graph theory, social network analysis and cluster analysis.
Design/Methodology/Approach: The basis for the empirical research was provided by data reflecting social media users' perceptions of the existing image of Russia and its activities in the Arctic, chosen as a model case.
Findings: The algorithm makes it possible to estimate the density and intensity of connections between actors, to trace the main channels through which public opinion forms and the key agents of influence, to identify implicit patterns and trends, and to relate information flows and events to current news stories, supporting the subsequent formation of a "cleansed" image of the object under study and of the key actors with whom this object is associated.
Practical Implications: The work contributes to filling the existing gap in the scientific literature caused by insufficient elaboration of the issues of applying social network analysis to solve sociological problems.
Originality/Value: The work addresses the insufficient development of practical issues in using social network analysis to solve sociological problems.
Peer-reviewed
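Two of the quantities the algorithm estimates, network density and key agents of influence, can be sketched in plain Python over an edge list; the accounts and edges below are hypothetical.

```python
from collections import Counter

# Toy mention/repost edges among social media accounts (hypothetical data).
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]

def density(edges):
    """Undirected network density: edges present divided by edges possible."""
    nodes = {n for e in edges for n in e}
    n = len(nodes)
    return 2 * len(edges) / (n * (n - 1))

def top_influencers(edges, k=1):
    """Rank actors by degree, a simple proxy for 'key agents of influence'."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [node for node, _ in deg.most_common(k)]

d = density(edges)
leaders = top_influencers(edges, k=1)
```

On this toy network the density is 0.5 and account "a", with the most connections, surfaces as the key agent of influence.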
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenge have not been
recognized, and its own methodology has not been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and science paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing.
Comment: 59 pages
Complex networks analysis in socioeconomic models
This chapter aims at reviewing complex networks models and methods that were
either developed for or applied to socioeconomic issues, and pertinent to the
theme of New Economic Geography. After an introduction to the foundations of
the field of complex networks, the present summary adds insights on the
statistical mechanical approach, and on the most relevant computational aspects
for the treatment of these systems. As the most frequently used model for
interacting agent-based systems, a brief description of the statistical
mechanics of the classical Ising model on regular lattices, together with
recent extensions of the same model on small-world Watts-Strogatz and
scale-free Barabasi-Albert complex networks is included. Other sections of the
chapter are devoted to applications of complex networks to economics, finance,
spreading of innovations, and regional trade and developments. The chapter also
reviews results involving applications of complex networks to other relevant
socioeconomic issues, including results for opinion and citation networks.
Finally, some avenues for future research are introduced before summarizing the
main conclusions of the chapter.
Comment: 39 pages, 185 references; (not final version of) a chapter prepared
for Complexity and Geographical Economics - Topics and Tools, P.
Commendatore, S.S. Kayam and I. Kubin, Eds. (Springer, to be published)
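The statistical-mechanics approach the chapter reviews can be illustrated with a minimal Metropolis simulation of the Ising model on a Watts-Strogatz small-world network. This is a didactic sketch, not the chapter's code; the parameters (n=60, k=4, p=0.1, beta=2.0) are arbitrary choices.

```python
import math
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice with k neighbours per node, each edge rewired with probability p."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))
    rewired = set()
    for (u, v) in edges:
        if rng.random() < p:
            w = rng.randrange(n)
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired

def neighbours(edges, n):
    """Adjacency sets from an undirected edge list."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def metropolis_ising(adj, beta, steps, rng):
    """Metropolis dynamics: flip a spin if it lowers the energy, otherwise
    accept the flip with probability exp(-beta * dE)."""
    spins = {i: rng.choice((-1, 1)) for i in adj}
    for _ in range(steps):
        i = rng.choice(list(adj))
        dE = 2 * spins[i] * sum(spins[j] for j in adj[i])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return sum(spins.values()) / len(spins)  # magnetisation per spin

rng = random.Random(42)
adj = neighbours(watts_strogatz(60, 4, 0.1, rng), 60)
m = metropolis_ising(adj, beta=2.0, steps=20000, rng=rng)
```

At this low temperature (large beta) the spins tend to order, which is the kind of collective behaviour that makes the Ising model a popular template for interacting-agent systems.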
Scale-adjusted metrics for predicting the evolution of urban indicators and quantifying the performance of cities
More than half of the world's population now lives in cities, and this number
is expected to reach two-thirds by 2050. Fostered by the relevance of a scientific
characterization of cities and by the availability of an unprecedented amount
of data, academics have recently immersed themselves in this topic, and one of the
most striking and universal findings was the discovery of robust allometric scaling
laws between several urban indicators and population size. Despite this,
most governmental reports and several academic works still ignore these
nonlinearities by analyzing the raw or per capita values of urban
indicators, a practice that actually biases urban metrics towards
small or large cities, depending on whether the allometries are superlinear or
sublinear. By following the ideas of Bettencourt et al., we account for this
bias by evaluating the difference between the actual value of an urban
indicator and the value expected by the allometry with the population size. We
show that this scale-adjusted metric provides a more appropriate and informative
summary of the evolution of urban indicators and reveals patterns that do not
appear in the evolution of per capita values of indicators obtained from
Brazilian cities. We also show that these scale-adjusted metrics are strongly
correlated with their past values by a linear correspondence and that they also
display cross-correlations among themselves. Simple linear models account for
31%-97% of the observed variance in the data and correctly reproduce the average of
the scale-adjusted metric when grouping the cities into those above and below the
allometric laws. We further employ these models to forecast future values of
urban indicators and, by visualizing the predicted changes, we verify the
emergence of spatial clusters characterized by regions of the Brazilian
territory where we expect an increase or a decrease in the values of urban
indicators.
Comment: Accepted for publication in PLoS ONE
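The scale-adjusted metric described above amounts to fitting the allometric law in log-log space and taking residuals. A minimal sketch with invented (population, indicator) pairs follows; the data is illustrative, not the Brazilian dataset.

```python
import math

# Toy (population, indicator) pairs following a superlinear allometry,
# with the last city over-performing a same-sized peer (hypothetical values).
data = [(10_000, 50), (100_000, 700), (1_000_000, 10_000), (100_000, 1_400)]

def fit_allometry(data):
    """Least-squares fit of log Y = log Y0 + beta * log N."""
    xs = [math.log(n) for n, _ in data]
    ys = [math.log(y) for _, y in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    log_y0 = my - beta * mx
    return log_y0, beta

def scale_adjusted(data):
    """Residual log Y - log Y_hat: positive means the city over-performs
    what the allometric law predicts for its population size."""
    log_y0, beta = fit_allometry(data)
    return [math.log(y) - (log_y0 + beta * math.log(n)) for n, y in data]

log_y0, beta = fit_allometry(data)
adjusted = scale_adjusted(data)
```

Note that the two cities with the same population differ in the adjusted metric by exactly the log-ratio of their indicator values, while raw per capita comparison would conflate this with the size effect.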
Extraction and Analysis of Facebook Friendship Relations
Online Social Networks (OSNs) are a unique Web and social phenomenon, affecting the tastes and behaviors of their users and helping them to maintain and create friendships. It is interesting to analyze the growth and evolution of Online Social Networks both from the point of view of marketing and of new services and from a scientific viewpoint, since their structure and evolution may share similarities with real-life social networks. In social sciences, several techniques for analyzing (online) social networks have been developed, to evaluate quantitative properties (e.g., defining metrics and measures of structural characteristics of the networks) or qualitative aspects (e.g., studying the attachment model for the network evolution, the binary trust relationships, and the link prediction problem).
However, OSN analysis poses novel challenges to both Computer and Social scientists. We present our long-term research effort in analyzing Facebook, the largest and arguably most successful OSN today: it gathers more than 500 million users. Access to data about Facebook users and their friendship relations is restricted; thus, we acquired the necessary information directly from the front-end of the Web site, in order to reconstruct a sub-graph representing anonymous interconnections among a significant subset of users. We describe our ad-hoc, privacy-compliant crawler for Facebook data extraction. To minimize bias, we adopt two different graph mining techniques: breadth-first search (BFS) and rejection sampling. To analyze the structural properties of samples consisting of millions of nodes, we developed a specific tool for analyzing the quantitative and qualitative properties of social networks, adopting and improving existing Social Network Analysis (SNA) techniques and algorithms.
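The two sampling strategies can be sketched on a toy friendship graph; in the study, of course, nodes are crawled from Facebook's front-end rather than read from a dict, and the id space is vastly larger.

```python
import random
from collections import deque

# Toy friendship graph keyed by user id (hypothetical data).
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3, 5], 5: [4],
}

def bfs_sample(graph, seed, limit):
    """Breadth-first crawl from a seed until `limit` nodes are visited.
    Fast and exhaustive locally, but biased towards high-degree nodes."""
    visited, queue, seen = [], deque([seed]), {seed}
    while queue and len(visited) < limit:
        node = queue.popleft()
        visited.append(node)
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return visited

def rejection_sample(graph, id_space, size, rng):
    """Draw uniform ids from a larger id space, rejecting ids with no profile;
    this yields an unbiased uniform sample of existing users."""
    sample = []
    while len(sample) < size:
        candidate = rng.randrange(id_space)
        if candidate in graph:  # reject ids that do not correspond to a user
            sample.append(candidate)
    return sample

bfs = bfs_sample(graph, seed=0, limit=4)
uni = rejection_sample(graph, id_space=100, size=4, rng=random.Random(1))
```

Comparing statistics across the two samples, as the paper does, is a simple way to gauge how much the BFS crawl distorts the structural properties being measured.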