Analysis of a continuous-time model of structural balance
It is not uncommon for certain social networks to divide into two opposing
camps in response to stress. This happens, for example, in networks of
political parties during winner-takes-all elections, in networks of companies
competing to establish technical standards, and in networks of nations faced
with mounting threats of war. A simple model for these two-sided separations is
the dynamical system dX/dt = X^2 where X is a matrix of the friendliness or
unfriendliness between pairs of nodes in the network. Previous simulations
suggested that only two types of behavior were possible for this system: either
all relationships become friendly, or two hostile factions emerge. Here we
prove that for generic initial conditions, these are indeed the only possible
outcomes. Our analysis yields a closed-form expression for faction membership
as a function of the initial conditions, and implies that the initial amount of
friendliness in large social networks (started from random initial conditions)
determines whether they will end up in intractable conflict or global harmony.
Comment: 12 pages, 2 figures
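The model stated in this abstract admits a quick numerical check. The sketch below is illustrative, not the paper's code: assuming a symmetric random initial matrix, it compares the standard closed-form solution X(t) = X0 (I - t X0)^{-1} of dX/dt = X^2 against forward-Euler integration, and reads candidate faction labels off the sign pattern of the leading eigenvector of X0 (all variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
X0 = (A + A.T) / 2                        # symmetric random initial conditions

lam = np.linalg.eigvalsh(X0)              # eigenvalues, ascending
t = 0.5 / lam[-1]                         # a time safely before blow-up at 1/lam_max

# Closed-form solution of dX/dt = X^2:  X(t) = X0 (I - t X0)^{-1}
X_closed = X0 @ np.linalg.inv(np.eye(n) - t * X0)

# Cross-check with forward-Euler integration of dX/dt = X^2
X = X0.copy()
steps = 20_000
dt = t / steps
for _ in range(steps):
    X = X + dt * (X @ X)
err = float(np.max(np.abs(X - X_closed)))

# Near blow-up the sign pattern of X(t) is governed by the leading
# eigenvector of X0: nodes with the same sign land in the same faction.
v = np.linalg.eigh(X0)[1][:, -1]
factions = v > 0
```

The solution blows up in finite time at t = 1/lambda_max when the largest eigenvalue of X0 is positive, which is why the integration stops well short of that horizon.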
The egalitarian effect of search engines
Search engines have become key media for our scientific, economic, and social
activities by enabling people to access information on the Web in spite of its
size and complexity. On the down side, search engines bias the traffic of users
according to their page-ranking strategies, and some have argued that they
create a vicious cycle that amplifies the dominance of established and already
popular sites. We show that, contrary to these prior claims and our own
intuition, the use of search engines actually has an egalitarian effect. We
reconcile theoretical arguments with empirical evidence showing that the
combination of retrieval by search engines and search behavior by users
mitigates the attraction of popular pages, directing more traffic toward less
popular sites, even in comparison to what would be expected from users randomly
surfing the Web.
Comment: 9 pages, 8 figures, 2 appendices. The final version of this e-print has been published in Proc. Natl. Acad. Sci. USA 103(34), 12684-12689 (2006), http://www.pnas.org/cgi/content/abstract/103/34/1268
Social Ranking Techniques for the Web
The proliferation of social media has the potential for changing the
structure and organization of the web. In the past, scientists have looked at
the web as a large connected component to understand how the topology of
hyperlinks correlates with the quality of information contained in the page and
they proposed techniques to rank information contained in web pages. We argue
that information from web pages and network data on social relationships can be
combined to create a personalized and socially connected web. In this paper, we
look at the web as a composition of two networks, one consisting of information
in web pages and the other of personal data shared on social media web sites.
Together, they allow us to analyze how social media tunnels the flow of
information from person to person and how to use the structure of the social
network to rank, deliver, and organize information specifically for each
individual user. We validate our social ranking concepts through a ranking
experiment conducted on web pages that users shared on Google Buzz and Twitter.
Comment: 7 pages, ASONAM 201
Can Cascades be Predicted?
On many social networking web sites such as Facebook and Twitter, resharing
or reposting functionality allows users to share others' content with their own
friends or followers. As content is reshared from user to user, large cascades
of reshares can form. While a growing body of research has focused on analyzing
and characterizing such cascades, a recent, parallel line of work has argued
that the future trajectory of a cascade may be inherently unpredictable. In
this work, we develop a framework for addressing cascade prediction problems.
On a large sample of photo reshare cascades on Facebook, we find strong
performance in predicting whether a cascade will continue to grow in the
future. We find that the relative growth of a cascade becomes more predictable
as we observe more of its reshares, that temporal and structural features are
key predictors of cascade size, and that, initially, breadth rather than depth
in a cascade is a better indicator of larger cascades. This prediction
performance is robust in the sense that multiple distinct classes of features
all achieve similar performance. We also discover that temporal features are
predictive of a cascade's eventual shape. Observing independent cascades of the
same content, we find that while these cascades differ greatly in size, we are
still able to predict which one ends up the largest.
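As a concrete illustration of the kind of temporal and structural features the abstract describes, here is a minimal sketch (the function and feature names are our own, hypothetical choices, not the paper's) that extracts an early-spread rate, cascade depth, and root breadth from an observed reshare prefix:

```python
import numpy as np
from collections import Counter

def cascade_features(reshares):
    """Features from a cascade prefix. `reshares` is a list of
    (timestamp, parent_index) pairs; node 0 is the root post and
    reshare i (1-indexed) points at an earlier node as its parent."""
    times = np.array([t for t, _ in reshares])
    depth = {0: 0}
    for i, (_, p) in enumerate(reshares, start=1):
        depth[i] = depth[p] + 1
    gaps = np.diff(times)
    return {
        "mean_gap": float(gaps.mean()),         # temporal: early spread speed
        "max_depth": max(depth.values()),        # structural: longest chain
        "breadth": Counter(depth.values())[1],   # structural: root's direct reshares
    }

# Toy cascade: two direct reshares of the root, then a reshare of reshare 1
feats = cascade_features([(1.0, 0), (2.0, 0), (4.0, 1)])
```

In the setting the abstract describes, features like these would feed a classifier that predicts whether the observed cascade continues to grow, with breadth-style features carrying more signal than depth early on.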
Paradoxes in Fair Computer-Aided Decision Making
Computer-aided decision making--where a human decision-maker is aided by a
computational classifier in making a decision--is becoming increasingly
prevalent. For instance, judges in at least nine states make use of algorithmic
tools meant to determine "recidivism risk scores" for criminal defendants in
sentencing, parole, or bail decisions. A subject of much recent debate is
whether such algorithmic tools are "fair" in the sense that they do not
discriminate against certain groups (e.g., races) of people.
Our main result shows that for "non-trivial" computer-aided decision making,
either the classifier must be discriminatory, or a rational decision-maker
using the output of the classifier is forced to be discriminatory. We further
provide a complete characterization of situations where fair computer-aided
decision making is possible.
Explaining prosecution outcomes for cryptocurrency-based financial crimes
Purpose: Cryptocurrencies have been used to commit various offences, but enforcement efforts remain underdeveloped relative to the value of these crimes. This paper aims to examine factors associated with outcomes of US-based cryptocurrency financial crime prosecutions.
Design/methodology/approach: The authors studied the 37 resolved cryptocurrency-based financial crime cases in the USA to date, exploring the impact of offence, defendant and evidence characteristics on the mode of disposition and penalties. The authors used bivariate analyses and logistic regression models to determine relationships among these variables.
Findings: The presence of individual defendants only (rather than a corporate defendant or combination thereof) and the use of only a cryptocurrency other than Bitcoin in committing a crime each made a case less likely to be resolved by dismissal, trial or summary or default judgement.
Originality/value: This paper is the first to examine variables contributing to financial crime prosecution outcomes and has implications for prosecutorial decision-making, resource allocation and the prevention and detection of financial offences involving cryptocurrencies.
Evolving graphs: dynamical models, inverse problems and propagation
Applications such as neuroscience, telecommunication, online social networking,
transport and retail trading give rise to connectivity patterns that change over time.
In this work, we address the resulting need for network models and computational
algorithms that deal with dynamic links. We introduce a new class of evolving
range-dependent random graphs that gives a tractable framework for modelling and
simulation. We develop a spectral algorithm for calibrating a set of edge ranges from
a sequence of network snapshots and give a proof of principle illustration on some
neuroscience data. We also show how the model can be used computationally and
analytically to investigate the scenario where an evolutionary process, such as an
epidemic, takes place on an evolving network. This allows us to study the cumulative
effect of two distinct types of dynamics.
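As a rough sketch of the model class named above, a range-dependent random graph places an edge between nodes i and j with a probability that decays with the "range" |i - j|, and an evolving version updates each edge by a Markov birth/death rule between snapshots. The geometric decay and the birth/death rates below are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def snapshot(n, lam, rng):
    """One snapshot: edge (i, j) appears with probability lam ** |i - j|."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < lam ** (j - i):
                A[i, j] = A[j, i] = 1
    return A

def evolve(A, lam, alpha, omega, rng):
    """Markov edge update: an existing edge dies with probability omega;
    an absent edge of range r is born with probability alpha * lam ** r."""
    n = len(A)
    B = A.copy()
    for i in range(n):
        for j in range(i + 1, n):
            r = j - i
            if A[i, j]:
                B[i, j] = B[j, i] = int(rng.random() >= omega)
            else:
                B[i, j] = B[j, i] = int(rng.random() < alpha * lam ** r)
    return B

rng = np.random.default_rng(1)
A = snapshot(20, 0.5, rng)          # initial snapshot
B = evolve(A, 0.5, 0.2, 0.3, rng)   # next snapshot in the sequence
```

A calibration procedure of the kind the abstract mentions would go the other way: infer the per-edge ranges and rates from an observed sequence of snapshots rather than fix them up front.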
Of degens and defrauders: Using open-source investigative tools to investigate decentralized finance frauds and money laundering
Fraud across the decentralized finance (DeFi) ecosystem is growing, with victims losing billions to DeFi scams every year. However, there is a disconnect between the reported value of these scams and associated legal prosecutions. We use open-source investigative tools to (1) investigate potential frauds involving Ethereum tokens using on-chain data and token smart contract analysis, and (2) investigate the ways proceeds from these scams were subsequently laundered. The analysis enabled us to (1) uncover transaction-based evidence of several rug pull and pump-and-dump schemes, and (2) identify their perpetrators' money laundering tactics and cash-out methods. The rug pulls were less sophisticated than anticipated; money laundering techniques were also rudimentary, and many funds ended up at centralized exchanges. This study demonstrates how open-source investigative tools can extract transaction-based evidence that could be used in a court of law to prosecute DeFi frauds.