Probabilistic Inductive Classes of Graphs
Models of complex networks are generally defined as stochastic graph
processes in which edges and vertices are added or deleted over time to
simulate the evolution of networks. Here we define a unifying framework -
probabilistic inductive classes of graphs - for formalizing and studying the
evolution of complex networks. Our definition of a probabilistic inductive
class of graphs (PICG) extends the standard notion of an inductive class of
graphs (ICG) by imposing a probability space. A PICG is given by: (1) a class
B of initial graphs, the basis of the PICG; (2) a class R of generating rules,
each with a distinguished left element to which the rule is applied to obtain
the right element; (3) a probability distribution specifying how the initial
graph is chosen from class B; (4) a probability distribution specifying how
the rules from class R are applied; and, finally, (5) a probability
distribution specifying how the left elements for every rule in class R are
chosen. We point out that many existing models of growing networks can be cast
as PICGs. We show how the well known model of growing networks - the
preferential attachment model - can be studied as a PICG. As an illustration
we present results regarding the size, order, and degree sequence for PICG
models of connected and 2-connected graphs.
Comment: 15 pages, 6 figures
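To make the PICG ingredients concrete, the sketch below casts preferential attachment in those terms: the basis B is a single fixed initial graph, there is one generating rule (attach a new vertex to an existing one), and the left element of each rule application is drawn with probability proportional to degree. This is an illustrative reading of the framework, not the paper's formal construction; all names are ours.

```python
import random

def preferential_attachment(n, m=1, seed=None):
    """Preferential attachment read as a PICG (illustrative sketch):
    - basis B: one initial graph (a single edge), chosen with probability 1;
    - rule class R: one rule, "attach a new vertex to an existing vertex";
    - left-element distribution: attachment targets are sampled with
      probability proportional to their current degree.
    """
    rng = random.Random(seed)
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    # A repeated-vertex list makes degree-proportional sampling a uniform draw.
    targets = [0, 1]
    for v in range(2, n):
        # Apply the rule: choose up to m distinct left elements by degree.
        chosen = {rng.choice(targets) for _ in range(m)}
        for u in chosen:
            edges.append((u, v))
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
            targets.extend([u, v])
    return edges, degree

edges, degree = preferential_attachment(100, m=1, seed=42)
# order = number of vertices = len(degree); size = number of edges = len(edges)
```

With m = 1 the process grows a tree on top of the initial edge, so order and size differ by exactly the size of the basis graph, the kind of size/order bookkeeping the abstract's results concern.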
Access to post-compulsory education and training: economic, sociological and political determinants and remaining research gaps
This paper presents a review and assessment of existing theoretical accounts to explain differentials in access to education and training in advanced economies. These theories tend to focus on the analysis of the influence of a set of economic, sociological and political variables on access to education. Existing theories are criticized on two grounds. Firstly, they seldom take into consideration the crucial role of political-institutional factors, and in particular, welfare states' actions through direct investment and regulation in shaping access levels. Secondly, they focus narrowly on the analysis of different stages of education and training, and this does not reflect the current policy emphasis on lifelong learning. The paper concludes with an outline of a future research agenda to address these gaps, and also calls for a more rigorous analysis of the weight of the different factors affecting access.
A Protocol for a Distributed Recommender System
Trusting Agents for Trusting Electronic Societies
Gridded and direct Epoch of Reionisation bispectrum estimates using the Murchison Widefield Array
We apply two methods to estimate the 21 cm bispectrum from data taken within
the Epoch of Reionisation (EoR) project of the Murchison Widefield Array (MWA).
Using data acquired with the Phase II compact array allows a direct bispectrum
estimate to be undertaken on the multiple redundantly-spaced triangles of
antenna tiles, as well as an estimate based on data gridded to the uv-plane.
The direct and gridded bispectrum estimators are applied to 21 hours of
high-band (167-197 MHz; z = 6.2-7.5) data from the 2016 and 2017 observing
seasons. Analytic predictions for the bispectrum bias and variance for point
source foregrounds are derived. We compare the output of these approaches, the
foreground contribution to the signal, and future prospects for measuring the
bispectra with redundant and non-redundant arrays. We find that some triangle
configurations yield bispectrum estimates that are consistent with the expected
noise level after 10 hours, while equilateral configurations are strongly
foreground-dominated. Careful choice of triangle configurations may be made to
reduce the foreground bias that hinders power spectrum estimators, and the
21 cm bispectrum may be accessible in less time than the 21 cm power spectrum
for some wave modes, with detections in hundreds of hours.
Comment: 19 pages, 10 figures, accepted for publication in PASA
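The direct estimator described above can be sketched in a few lines: for each closed triangle of baselines, form the triple product of the three complex visibilities, then average over the redundant copies of that triangle. The sketch below is a toy illustration of that idea under our own naming assumptions (`vis`, `triangles`), not the MWA pipeline's actual interface.

```python
import numpy as np

def direct_bispectrum(vis, triangles):
    """Toy direct bispectrum estimate (illustrative, not the MWA pipeline):
    `vis` maps a baseline label to its measured complex visibility, and
    `triangles` lists (b1, b2, b3) label triples for redundant copies of one
    closed triangle (b1 + b2 + b3 = 0). The estimate is the triple product
    averaged over the redundant copies."""
    products = [vis[b1] * vis[b2] * vis[b3] for b1, b2, b3 in triangles]
    return np.mean(products)

# Toy data: three redundant copies of one triangle, with mock visibilities.
rng = np.random.default_rng(0)
vis = {k: rng.normal() + 1j * rng.normal() for k in range(9)}
triangles = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
B = direct_bispectrum(vis, triangles)
```

Averaging over redundant triangles is what the Phase II compact array enables directly; the gridded estimator instead forms the products from visibilities interpolated onto the uv-plane.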
Building collaboration in multi-agent systems using reinforcement learning
© Springer Nature Switzerland AG 2018. This paper presents a proof-of-concept study demonstrating the viability of building collaboration among multiple agents through the standard Q-learning algorithm embedded in particle swarm optimisation. Collaboration is formulated to be achieved among the agents via competition, where the agents are expected to balance their actions in such a way that none of them drifts away from the team and none intervenes in any fellow neighbour's territory, either. Particles are devised with Q-learning for self-training, to learn how to act as members of a swarm and how to produce collaborative/collective behaviours. The experimental results support the proposed idea, suggesting that substantive collaboration can be built via the proposed learning algorithm.
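The learning step the abstract embeds in each particle is the standard tabular Q-learning update. A minimal sketch follows; the states and actions here ("near_team", "move_toward_team", etc.) are illustrative assumptions of ours, not the authors' formulation.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One standard tabular Q-learning update:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    Q is a dict of dicts: Q[state][action] -> value."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy usage with hypothetical swarm-membership states and actions.
actions = ["move_toward_team", "hold_position"]
Q = {s: {a: 0.0 for a in actions} for s in ["near_team", "far_from_team"]}
q_update(Q, "far_from_team", "move_toward_team", reward=1.0,
         next_state="near_team")
```

In the paper's setting, each particle would run updates of this form while the swarm optimiser governs how the agents explore, rewarding actions that keep an agent with the team without encroaching on a neighbour.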
Building the New Europe: Western and Eastern Roads to Social Partnership
[Excerpt] While the ways in which neoliberalism and economic integration undermine social partnership and the welfare state have been extensively studied, less attention has been given to the ways in which such economic forces may push actors together, in reinvigorated bargaining relationships, to find workable solutions to difficult problems. In this article, we examine the contemporary status of social partnership in four case study countries (Germany, the United Kingdom, Bulgaria and Poland) as well as for Europe as a whole. In the west, while Germany presents a case of established social partnership under pressure, the United Kingdom has stood over the past two decades on the opposite neoliberal side. In the east, Bulgaria is one of the more developed cases of post-communist tripartism, while Poland exemplifies a weaker tripartism that emerged at a later stage of the transformation process. In selecting more and less developed social partnership cases in both west and east, we test the argument that the rise of Thatcher/Reagan/Friedman "free market economics" is paradoxically driving a resurgence and consolidation of social partnership relations across the new (both western and eastern) Europe.
Climate change litigation: a review of research on courts and litigants in climate governance
Studies of climate change litigation have proliferated over the past two decades, as lawsuits across the world increasingly bring policy debates about climate change mitigation and adaptation, as well as climate change-related loss and damage, to the attention of courts. We systematically identify 130 articles on climate change litigation published in English in the law and social sciences between 2000 and 2018 to identify research trajectories. In addition to a budding interdisciplinarity in scholarly interest in climate change litigation, we also document a growing understanding of the full spectrum of actors involved and implicated in climate lawsuits and the range of motivations and/or strategic imperatives underpinning their engagement with the law. Situating this within the broader academic literature on the topic, we then highlight a number of cutting-edge trends and opportunities for future research. Four emerging themes are explored in detail: the relationship between litigation and governance; how time and scale feature in climate litigation; the role of science; and what has been coined the "human rights turn" in climate change litigation. We highlight the limits of existing work and the need for future research, not limited to legal scholarship, to evaluate the impact of both regulatory and anti-regulatory climate-related lawsuits, and to explore a wider set of jurisdictions, actors and themes. Addressing these issues and questions will help to develop a deeper understanding of the conditions under which litigation will strengthen or undermine climate governance. This article is categorized under: Policy and Governance > Multilevel and Transnational Climate Change Governance
Should Research Ethics Encourage the Production of Cost-Effective Interventions?
This project considers whether and how research ethics can contribute to the provision of cost-effective medical interventions. Clinical research ethics represents an underexplored context for the promotion of cost-effectiveness. In particular, although scholars have recently argued that research on less-expensive, less-effective interventions can be ethical, there has been little or no discussion of whether ethical considerations justify curtailing research on more expensive, more effective interventions. Yet considering cost-effectiveness at the research stage can help ensure that scarce resources such as tissue samples or limited subject populations are employed where they do the most good; can support parallel efforts by providers and insurers to promote cost-effectiveness; and can ensure that research has social value and benefits subjects. I discuss and rebut potential objections to the consideration of cost-effectiveness in research, including the difficulty of predicting effectiveness and cost at the research stage, concerns about limitations in cost-effectiveness analysis, and worries about overly limiting researchers' freedom. I then consider the advantages and disadvantages of having certain participants in the research enterprise, including IRBs, advisory committees, sponsors, investigators, and subjects, consider cost-effectiveness. The project concludes by qualifiedly endorsing the consideration of cost-effectiveness at the research stage. While incorporating cost-effectiveness considerations into the ethical evaluation of human subjects research will not on its own ensure that the health care system realizes cost-effectiveness goals, doing so nonetheless represents an important part of a broader effort to control rising medical costs.
Optimising the glaucoma signal/noise ratio by mapping changes in spatial summation with area-modulated perimetric stimuli
Identification of glaucomatous damage and progression by perimetry is limited by measurement and response variability. This study tested the hypothesis that the glaucoma damage signal/noise ratio is greater with stimuli varying in area, either solely or simultaneously with contrast, than with conventional stimuli varying in contrast only (Goldmann III, GIII). Thirty glaucoma patients and 20 age-similar healthy controls were tested with the Method of Constant Stimuli (MOCS). One stimulus modulated in area (A), one modulated in contrast within Ricco's area (C_R), one modulated in both area and contrast simultaneously (AC), and the reference stimulus was a GIII, modulating in contrast. Stimuli were presented on a common platform with a common scale (energy). A three-stage protocol minimised artefactual MOCS slope bias that can occur due to differences in psychometric function sampling between conditions. Threshold difference from age-matched normal (total deviation), response variability, and signal/noise ratio were compared between stimuli. Total deviation was greater, and response variability less dependent on defect depth, with the A, AC, and C_R stimuli compared with the GIII. Both the A and AC stimuli showed a significantly greater signal/noise ratio than the GIII, indicating that area-modulated stimuli offer benefits over the GIII for identifying early glaucoma and measuring progression.