A Multi-Dimensional Approach for Framing Crowdsourcing Archetypes
All kinds of organizations (business, public, and non-governmental alike) are becoming aware of a soaring complexity in problem solving, decision making and idea development. In a multitude of circumstances, multidisciplinary teams, high-caliber skilled resources and world-class computer suites do not suffice to cope with such complexity: a further need concerns the sharing and "externalization" of tacit knowledge already existing in society. In this direction, the participatory tendencies flourishing in today's interconnected society allow "collective intelligence" to emerge as a key ingredient of distributed problem-solving systems that go well beyond the traditional boundaries of organizations. The resulting outputs can remarkably enrich the decision and creative processes carried out by in-house experts, allowing organizations to reap benefits in terms of opportunity, time and cost.
Taking stock of the mare magnum of promising opportunities to be tapped, of the inherent diversity among them, and of the enormous success of some initiatives launched hitherto, the thesis aspires to provide a sound basis for the clear comprehension and systematic exploitation of crowdsourcing.
After a thorough literature review, the thesis explores new ways of formalizing crowdsourcing models with the aim of distilling a brand-new multi-dimensional framework for categorizing crowdsourcing archetypes. In a nutshell, the proposed framework combines two dimensions (i.e., motivations to participate and organization of external solvers) in order to portray six archetypes. Among the numerous elements of novelty brought by this framework, the most prominent is its "holistic" approach, which combines both profit and non-profit, putting the private and public sectors under a common roof in order to examine as a whole corpus the multi-faceted mechanisms for mobilizing and harnessing the competence and expertise distributed among the crowd.
Looking at how the crowd may be turned into value to be internalized by organizations, the thesis examines crowdsourcing practices in the public as well as in the private sector. Regarding the former, the investigation leverages the experience of the PADGETS project through action research (drawing on theoretical studies as well as on intensive fieldwork) to systematize how crowdsourcing can be fruitfully incorporated into the policy lifecycle. Concerning the private realm, a cohort of real cases in the limelight is examined (using case study methodology) to formalize the different ways in which crowdsourcing becomes a business-model game-changer.
Finally, the two perspectives (i.e., public and private) are coalesced into an integrated view acting as a backdrop for proposing a next-generation governance model massively hinged on crowdsourcing. Drawing on the schematized archetypes, the thesis depicts a potential paradigm that governments may embrace in the near future to tap the potential of collective intelligence, thus maximizing the utilization of a resource that today seems certainly underexploited.
Modelling the evolution of transcription factor binding preferences in complex eukaryotes
Transcription factors (TFs) exert their regulatory action by binding to DNA
with specific sequence preferences. However, different TFs can partially share
their binding sequences due to their common evolutionary origin. This
`redundancy' of binding defines a way of organizing TFs in `motif families' by
grouping TFs with similar binding preferences. Since these ultimately define
the TF target genes, the motif family organization entails information about
the structure of transcriptional regulation as it has been shaped by evolution.
Focusing on the human TF repertoire, we show that a one-parameter evolutionary
model of the Birth-Death-Innovation type can explain the empirical
repartition of TFs into motif families, and allows us to highlight the relevant
evolutionary forces at the origin of this organization. Moreover, the model
allows us to pinpoint a few deviations from the neutral scenario it assumes: three
over-expanded families (including HOX and FOX genes), a set of `singleton' TFs
for which duplication seems to be selected against, and a higher-than-average
rate of diversification of the binding preferences of TFs with a Zinc Finger
DNA binding domain. Finally, a comparison of the TF motif family organization
in different eukaryotic species suggests an increase of redundancy of binding
with organism complexity.

Comment: 14 pages, 5 figures. Minor changes. Final version, accepted for publication.
Stochastic timing in gene expression for simple regulatory strategies
Timing is essential for many cellular processes, from cellular responses to
external stimuli to the cell cycle and circadian clocks. Many of these
processes are based on gene expression. For example, an activated gene may be
required to reach, within a precise time, a threshold level of expression that
triggers a specific downstream process. However, gene expression is subject to
stochastic fluctuations, naturally inducing an uncertainty in this
threshold-crossing time with potential consequences on biological functions and
phenotypes. Here, we consider such "timing fluctuations", and we ask how they
can be controlled. Our analytical estimates and simulations show that, for an
induced gene, timing variability is minimal if the threshold level of
expression is approximately half of the steady-state level. Timing fluctuations
can be reduced by increasing the transcription rate, while they are insensitive
to the translation rate. In the presence of self-regulatory strategies, we show
that self-repression reduces timing noise for threshold levels that have to be
reached quickly, while self-activation is optimal at long times. These results
lay a framework for understanding stochasticity of endogenous systems such as
the cell cycle, as well as for the design of synthetic trigger circuits.

Comment: 10 pages, 5 figures
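The threshold-crossing picture described in this abstract can be illustrated with a minimal first-passage simulation. This is a sketch, not the authors' code: a birth-death model of an induced gene where expression is produced at rate k and degraded at rate g·n, with illustrative parameter values and function names.

```python
import random

def first_passage_time(k, g, threshold, rng):
    """Gillespie simulation of an induced gene: n -> n+1 at rate k,
    n -> n-1 at rate g*n, starting from n = 0; returns the time at
    which the expression level first reaches `threshold`."""
    t, n = 0.0, 0
    while n < threshold:
        total_rate = k + g * n
        t += rng.expovariate(total_rate)   # waiting time to the next event
        if rng.random() < k / total_rate:  # production event
            n += 1
        else:                              # degradation event
            n -= 1
    return t

def timing_cv(k, g, threshold, n_runs, rng):
    """Coefficient of variation (relative noise) of the crossing time."""
    times = [first_passage_time(k, g, threshold, rng) for _ in range(n_runs)]
    mean = sum(times) / n_runs
    var = sum((x - mean) ** 2 for x in times) / n_runs
    return var ** 0.5 / mean
```

With steady-state level k/g = 50, the relative timing noise for a threshold at half the steady state comes out smaller than for a threshold close to saturation, in line with the abstract's claim.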
Gene autoregulation via intronic microRNAs and its functions
Background: MicroRNAs, post-transcriptional repressors of gene expression,
play a pivotal role in gene regulatory networks. They are involved in core
cellular processes and their dysregulation is associated with a broad range of
human diseases. This paper focuses on a minimal microRNA-mediated regulatory
circuit, in which a protein-coding gene (host gene) is targeted by a microRNA
located inside one of its introns. Results: Autoregulation via intronic
microRNAs is widespread in the human regulatory network, as confirmed by our
bioinformatic analysis, and can perform several regulatory tasks despite its
simple topology. Our analysis, based on analytical calculations and
simulations, indicates that this circuitry alters the dynamics of the host gene
expression, can induce complex responses implementing adaptation and Weber's
law, and efficiently filters fluctuations propagating from the upstream network
to the host gene. A fine-tuning of the circuit parameters can optimize each of
these functions. Interestingly, they are all related to gene expression
homeostasis, in agreement with the increasing evidence suggesting a role of
microRNA regulation in conferring robustness to biological processes. In
addition to model analysis, we present a list of bioinformatically predicted
candidate circuits in human for future experimental tests. Conclusions: The
results presented here suggest a potentially relevant functional role for
negative self-regulation via intronic microRNAs, in particular as a homeostatic
control mechanism of gene expression. Moreover, the map of circuit functions in
terms of experimentally measurable parameters, resulting from our analysis, can
be a useful guideline for possible applications in synthetic biology.

Comment: 29 pages and 7 figures in the main text, 18 pages of Supporting Information
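The buffering effect of this negative feedback can be sketched with a toy ODE model. The following is an illustration, not the paper's full circuit model: the host gene produces mRNA and the intronic miRNA at the same rate k, and the miRNA enhances degradation of the host mRNA; parameter values and names are illustrative.

```python
def steady_state_mrna(k, g, dt=0.01, t_max=50.0):
    """Euler integration of a toy intronic-miRNA autoregulation circuit:
        ds/dt = k - s            (miRNA, co-transcribed with the host)
        dm/dt = k - m - g*s*m    (host mRNA, repressed by the miRNA)
    Degradation rates are set to 1 (time measured in mRNA lifetimes).
    Returns the steady-state host mRNA level."""
    s = m = 0.0
    for _ in range(int(t_max / dt)):
        s += dt * (k - s)
        m += dt * (k - m - g * s * m)
    return m
```

Doubling the transcription rate k less than doubles the host mRNA level (analytically m* = k/(1 + g·k)), illustrating the homeostatic buffering of upstream fluctuations discussed above.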
Statistics of shared components in complex component systems
Many complex systems are modular. Such systems can be represented as
"component systems", i.e., sets of elementary components, such as LEGO bricks
in LEGO sets. The bricks found in a LEGO set reflect a target architecture,
which can be built following a set-specific list of instructions. In other
component systems, instead, the underlying functional design and constraints
are not obvious a priori, and their detection is often a challenge of both
scientific and practical importance, requiring a clear understanding of
component statistics. Importantly, some quantitative invariants appear to be
common to many component systems, most notably a common broad distribution of
component abundances, which often resembles the well-known Zipf's law. Such
"laws" affect in a general and non-trivial way the component statistics,
potentially hindering the identification of system-specific functional
constraints or generative processes. Here, we specifically focus on the
statistics of shared components, i.e., the distribution of the number of
components shared by different system-realizations, such as the common bricks
found in different LEGO sets. To account for the effects of component
heterogeneity, we consider a simple null model, which builds
system-realizations by random draws from a universe of possible components.
Under general assumptions on abundance heterogeneity, we provide analytical
estimates of component occurrence, which quantify exhaustively the statistics
of shared components. Surprisingly, this simple null model can positively
explain important features of empirical component-occurrence distributions
obtained from data on bacterial genomes, LEGO sets, and book chapters. Specific
architectural features and functional constraints can be detected from
occurrence patterns as deviations from these null predictions, as we show for
the illustrative case of the "core" genome in bacteria.

Comment: 18 pages, 7 main figures, 7 supplementary figures
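The null model described above can be sketched in a few lines: each system-realization is a random draw from a universe of components with heterogeneous (here Zipf-like) abundances, and the statistic of interest is how many realizations share each component. Parameters and names are illustrative.

```python
import random
from collections import Counter

def occurrence_distribution(weights, realization_size, n_realizations, rng):
    """Null model for component systems: each realization is a random draw
    of `realization_size` components from a universe with heterogeneous
    abundances (`weights`). Returns, for each component, the number of
    realizations in which it occurs at least once."""
    components = range(len(weights))
    occurrence = Counter()
    for _ in range(n_realizations):
        drawn = set(rng.choices(components, weights=weights, k=realization_size))
        for c in drawn:
            occurrence[c] += 1
    return [occurrence[c] for c in components]
```

Under Zipf-distributed abundances, high-abundance components occur in nearly all realizations while rare ones appear in few; empirical "core" components can then be detected as deviations from this baseline occurrence curve.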
Heaps' law, statistics of shared components and temporal patterns from a sample-space-reducing process
Zipf's law is a hallmark of several complex systems with a modular structure,
such as books composed of words or genomes composed of genes. In these
component systems, Zipf's law describes the empirical power law distribution of
component frequencies. Stochastic processes based on a sample-space-reducing
(SSR) mechanism, in which the number of accessible states reduces as the system
evolves, have been recently proposed as a simple explanation for the ubiquitous
emergence of this law. However, many complex component systems are
characterized by other statistical patterns beyond Zipf's law, such as a
sublinear growth of the component vocabulary with the system size, known as
Heaps' law, and specific statistics of shared components. This work shows,
with analytical calculations and simulations, that these statistical properties
can emerge jointly from an SSR mechanism, thus making it an appropriate
parameter-poor representation for component systems. Several alternative (and
equally simple) models, for example based on the preferential attachment
mechanism, can also reproduce Heaps' and Zipf's laws, suggesting that
additional statistical properties should be taken into account to select the
most likely generative process for a specific system. Along this line, we will
show that the temporal component distribution predicted by the SSR model is
markedly different from the one emerging from the popular rich-gets-richer
mechanism. A comparison with empirical data from natural language indicates
that the SSR process can be chosen as a better candidate model for text
generation based on this statistical property. Finally, a limitation of the SSR
model in reproducing the empirical "burstiness" of word appearances in texts
will be pointed out, thus indicating a possible direction for extensions of the
basic SSR process.

Comment: 14 pages, 4 figures
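A sample-space-reducing process is simple to simulate. The following is a sketch of the standard SSR formulation, not the paper's code: start from the top of a "staircase" of N states, jump to a uniformly chosen strictly lower state, and restart once the bottom is reached; the visit frequencies of the states then follow Zipf's law p(i) ∝ 1/i.

```python
import random

def ssr_sequence(n_states, n_restarts, rng):
    """Sample-space-reducing (SSR) process: start at the top state
    `n_states`, repeatedly jump to a uniformly chosen strictly lower
    state, and restart from the top once state 1 is reached.
    Returns the full sequence of visited states."""
    visits = []
    for _ in range(n_restarts):
        state = n_states
        while state > 1:
            state = rng.randint(1, state - 1)  # uniform over lower states
            visits.append(state)
    return visits
```

Per restart, state i is visited with probability exactly 1/i, so the aggregated visit frequencies reproduce Zipf's law without any tunable exponent.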
Evaluating Advanced Forms of Social Media Use in Government
Government agencies are gradually moving from simpler to more advanced forms of social media use, which are characterized by higher technological and political complexity. It is important to evaluate these efforts systematically, based on sound theoretical foundations. In this direction, this paper outlines and evaluates an advanced form of automated and centrally managed combined use of multiple social media by government agencies for promoting participative public policy making. For this purpose an evaluation framework has been developed, which includes both technological and political evaluation, and focuses on the fundamental complexities and challenges of these advanced forms of social media exploitation. It has been used to evaluate a pilot application of this approach for conducting a consultation campaign concerning the large-scale application of a telemedicine program in Piedmont, Italy, revealing its important potential and strengths, as well as some notable problems and weaknesses.
Knowledge Graph Embeddings with node2vec for Item Recommendation
In the past years, knowledge graphs have proven to be beneficial
for recommender systems, efficiently addressing paramount issues
such as new items and data sparsity. Graph embedding algorithms have been
shown to automatically learn high-quality feature vectors
from graph structures, enabling vector-based measures of node relatedness.
In this paper, we show how node2vec can be used to generate item
recommendations by learning knowledge graph embeddings. We apply
node2vec on a knowledge graph built from the MovieLens 1M dataset
and DBpedia and use the node relatedness to generate item recommendations.
The results show that node2vec consistently outperforms a set
of collaborative filtering baselines on an array of relevant metrics.
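The final step, ranking items by vector-based relatedness over learned embeddings, can be illustrated with a minimal stdlib sketch. Tiny hand-written 2-D vectors stand in here for node2vec embeddings of user and item nodes; all names and values are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(user_vec, item_vecs, top_k):
    """Rank items by vector-based relatedness to the user node."""
    ranked = sorted(item_vecs, key=lambda i: cosine(user_vec, item_vecs[i]),
                    reverse=True)
    return ranked[:top_k]
```

In the actual pipeline the vectors would come from running node2vec on the graph built from MovieLens 1M and DBpedia; only the relatedness-based ranking is sketched here.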
An empirical comparison of knowledge graph embeddings for item recommendation
In the past years, knowledge graphs have proven to be beneficial
for recommender systems, efficiently addressing paramount issues
such as new items and data sparsity. At the same time, several works have
recently tackled the problem of knowledge graph completion through machine
learning algorithms able to learn knowledge graph embeddings. In
this paper, we show that the item recommendation problem can be seen
as a specific case of knowledge graph completion problem, where the
"feedback" property, which connects users to items that they like, has to
be predicted. We empirically compare a set of state-of-the-art knowledge
graph embedding algorithms on the task of item recommendation on
the MovieLens 1M dataset. The results show that knowledge graph embedding
models outperform traditional collaborative filtering baselines
and that TransH obtains the best performance.
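TransH, the best performer in this comparison, scores a triple (head, relation, tail) by projecting the entity embeddings onto a relation-specific hyperplane before applying a translation. A minimal sketch of that scoring function follows, using toy vectors rather than trained embeddings; in this framing, recommending items amounts to ranking candidate tails for a (user, feedback, ·) query by this score.

```python
import math

def transh_score(h, t, w, d):
    """TransH score for a triple: project head h and tail t onto the
    hyperplane with unit normal w, translate by the relation vector d,
    and return the distance ||h_perp + d - t_perp|| (lower = more
    plausible)."""
    def project(v):
        dot = sum(a * b for a, b in zip(v, w))
        return [a - dot * b for a, b in zip(v, w)]
    h_p, t_p = project(h), project(t)
    return math.sqrt(sum((a + b - c) ** 2
                         for a, b, c in zip(h_p, d, t_p)))
```

The hyperplane projection is what lets one entity take different roles under different relations, which plain translation models such as TransE cannot express.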