922 research outputs found
Computing Stable Coalitions: Approximation Algorithms for Reward Sharing
Consider a setting where selfish agents are to be assigned to coalitions or
projects from a fixed set P. Each project k is characterized by a valuation
function; v_k(S) is the value generated by a set S of agents working on project
k. We study the following classic problem in this setting: "how should the
agents divide the value that they collectively create?". One traditional
approach in cooperative game theory is to study core stability with the
implicit assumption that there are infinite copies of one project, and agents
can partition themselves into any number of coalitions. In contrast, we
consider a model with a finite number of non-identical projects; this makes
computing both high-welfare solutions and core payments highly non-trivial.
The main contribution of this paper is a black-box mechanism that reduces the
problem of computing a near-optimal core stable solution to the purely
algorithmic problem of welfare maximization; we apply this to compute an
approximately core stable solution that extracts one-fourth of the optimal
social welfare for the class of subadditive valuations. We also show much
stronger results for several popular sub-classes: anonymous, fractionally
subadditive, and submodular valuations, as well as provide new approximation
algorithms for welfare maximization with anonymous functions. Finally, we
establish a connection between our setting and the well-studied simultaneous
auctions with item bidding; we adapt our results to compute approximate pure
Nash equilibria for these auctions. Comment: Under Review
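As a toy illustration of the stability notion in this abstract (not the paper's black-box mechanism), the core condition for a single small project can be checked by brute force: a payoff vector is core stable if no coalition could generate more value on its own than its members are currently paid. The valuation, agent set, and payoffs below are all hypothetical.

```python
from itertools import combinations

def is_core_stable(payoff, value, agents):
    """A payoff is core stable if no coalition S has v(S) greater than
    the total currently paid to its members (brute force over subsets)."""
    for r in range(1, len(agents) + 1):
        for S in combinations(agents, r):
            if value(S) > sum(payoff[i] for i in S) + 1e-9:
                return False  # S would profitably deviate
    return True

agents = list(range(4))
additive = lambda S: float(len(S))      # toy (sub)additive valuation: v(S) = |S|
equal_split = {i: 1.0 for i in agents}  # split v(N) = 4 equally
print(is_core_stable(equal_split, additive, agents))   # True: no blocking coalition

lopsided = {0: 0.0, 1: 4 / 3, 2: 4 / 3, 3: 4 / 3}
print(is_core_stable(lopsided, additive, agents))      # False: agent 0 alone earns 1 > 0
```

The brute-force check is exponential in the number of agents; the point of the paper's reduction is precisely to avoid this kind of enumeration for large instances.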
Collective emotions online and their influence on community life
E-communities, social groups interacting online, have recently become an
object of interdisciplinary research. As with face-to-face meetings, Internet
exchanges may not only include factual information but also emotional
information - how participants feel about the subject discussed or other group
members. Emotions are known to be important in affecting interaction partners
in offline communication in many ways. Could emotions in Internet exchanges
affect others and systematically influence quantitative and qualitative aspects
of the trajectory of e-communities? The development of automatic sentiment
analysis has made large scale emotion detection and analysis possible using
text messages collected from the web. It is not clear if emotions in
e-communities primarily derive from individual group members' personalities or
if they result from intra-group interactions, and whether they influence group
activities. We show the collective character of affective phenomena on a large
scale as observed in 4 million posts downloaded from blogs, Digg, and BBC
forums. To test whether the emotions of a community member may influence the
emotions of others, posts were grouped into clusters of messages with similar
emotional valences. The frequency of long clusters was much higher than it
would be if emotions occurred at random. Distributions for cluster lengths can
be explained by preferential processes because conditional probabilities for
consecutive messages grow as a power law with cluster length. For BBC forum
threads, average discussion lengths were higher for larger values of absolute
average emotional valence in the first ten comments, and the average amount of
emotion in messages fell during discussions. Our results show that collective
emotional states can be created and modulated via Internet communication and
that emotional expressiveness is the fuel that sustains some e-communities. Comment: 23 pages including Supporting Information, accepted to PLoS ONE
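The clustering evidence described above can be illustrated with a minimal run-length comparison on a hypothetical valence sequence (the study's actual corpus is the 4 million posts mentioned in the abstract): emotional persistence shows up as runs of equal valence that are longer than in a shuffled version of the same sequence.

```python
import random

def run_lengths(seq):
    """Lengths of maximal runs of consecutive equal values."""
    runs, n = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            n += 1
        else:
            runs.append(n)
            n = 1
    runs.append(n)
    return runs

random.seed(1)
# Hypothetical valences (-1 negative, 0 neutral, +1 positive) with built-in
# persistence, standing in for a real comment thread.
observed = [1] * 6 + [-1] * 4 + [0] * 2 + [1] * 5 + [-1] * 3
shuffled = observed[:]
random.shuffle(shuffled)

print(max(run_lengths(observed)))  # 6: the longest same-valence cluster
print(max(run_lengths(shuffled)))  # typically much shorter after shuffling
```

Comparing observed run-length distributions against many such shuffles is the standard way to test whether same-valence clusters occur more often than chance would predict.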
A framework and a measurement instrument for sustainability of work practices in long-term care
<p>Abstract</p> <p>Background</p> <p>In health care, many organizations are working on quality improvement and/or innovation of their care practices. Although the effectiveness of improvement processes has been studied extensively, little attention has been given to the sustainability of changed work practices after implementation. The objective of this study is to develop a theoretical framework and measurement instrument for sustainability. To this end, sustainability is conceptualized with two dimensions: routinization and institutionalization.</p> <p>Methods</p> <p>The exploratory methodological design consisted of three phases: a) framework development; b) instrument development; and c) field testing in former improvement teams in a quality improvement program for health care (N <sub>teams </sub>= 63, N <sub>individual </sub>= 112). Data were collected only after at least one year had passed since implementation.</p> <p>Underlying constructs and their interrelations were explored using Structural Equation Modeling and Principal Component Analyses. Internal consistency was computed with Cronbach's alpha coefficient. A long and a short version of the instrument are proposed.</p> <p>Results</p> <p>The χ<sup>2</sup>-difference test of the -2 Log Likelihood estimates demonstrated that the hierarchical two-factor model with routinization and institutionalization as separate constructs showed a better fit than the one-factor model (p < .01). Secondly, construct validity of the instrument was strong, as indicated by the high factor loadings of the items. Finally, the internal consistency of the subscales was good.</p> <p>Conclusions</p> <p>The theoretical framework offers a valuable starting point for the analysis of sustainability on the level of actual changed work practices. Even though the two dimensions routinization and institutionalization are related, they are clearly distinguishable and each has distinct value in the discussion of sustainability. Finally, the subscales conformed to psychometric properties defined in the literature. The instrument can be used in the evaluation of improvement projects.</p>
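The Cronbach's alpha statistic named in this abstract can be computed directly from item-level scores. The sketch below uses hypothetical Likert data (3 items, 4 respondents), not the study's 112-respondent sample.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
    respondents' total scores). items holds one score list per questionnaire
    item, aligned across respondents."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 1-5 Likert answers: 3 items x 4 respondents, deliberately similar
# so that the items hang together as one scale.
items = [[2, 4, 3, 5],
         [3, 4, 2, 5],
         [2, 5, 3, 4]]
print(round(cronbach_alpha(items), 2))  # 0.89: good internal consistency
```

Alpha rises when items covary strongly relative to their individual variances, which is why it is read as a measure of internal consistency of a subscale.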
Exploring Norms in Agile Software Teams
The majority of software developers work in teams and are thus influenced by team norms. Norms are shared expectations of how to behave and regulate the interaction between team members. The aim of this study is to gain more knowledge about team norms in software teams and to increase understanding of how norms influence teamwork in agile software development projects. We conducted a study of norms in four agile teams located in Norway and Malaysia. The analysis of 22 interviews revealed a varied set of both injunctive and descriptive norms. Our results suggest that team norms play an important role in enabling team performance.
Risk measures for direct real estate investments with non-normal or unknown return distributions
The volatility of returns is probably the most widely used risk measure for real estate. This is rather surprising, since a number of studies have cast doubt on the view that volatility can capture the manifold risks attached to properties or that it corresponds to the risk attitude of investors. A central issue in this discussion is the statistical properties of real estate returns: in contrast to the assumptions of neoclassical capital market theory, they are mostly non-normal and often unknown, which renders many statistical measures useless. Based on a literature review and an analysis of data from Germany, we provide evidence that volatility alone is inappropriate for measuring the risk of direct real estate.
We use a unique data sample by IPD, which includes the total returns of 939 properties across different usage types (56% office, 20% retail, 8% other and 16% residential properties) from 1996 to 2009, the German IPD Index, and the German Property Index. The analysis of the distributional characteristics shows that German real estate returns in this period were not normally distributed and that a logistic distribution would have been a better fit. This is in line with most of the current literature on the subject and leads to the question of which indicators are more appropriate for measuring real estate risks. We suggest that a combination of quantitative and qualitative risk measures captures real estate risks more adequately and conforms better with investor attitudes to risk. Furthermore, we present criteria for the purpose of risk classification.
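A minimal way to screen a return series for the non-normality discussed above is to compute sample skewness and excess kurtosis: both are roughly 0 under normality, while a logistic distribution has excess kurtosis of 1.2. The return series below is hypothetical, not the IPD data.

```python
def skew_excess_kurtosis(xs):
    """Sample skewness and excess kurtosis; both are roughly 0 for normal
    data, while heavy-tailed returns show positive excess kurtosis."""
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    skew = sum((x - m) ** 3 for x in xs) / (n * s ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * s ** 4) - 3.0
    return skew, kurt

# Hypothetical annual total returns in %: mostly calm with two extreme years,
# i.e. deliberately heavier-tailed than a normal distribution.
returns = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, -5.0]
skew, kurt = skew_excess_kurtosis(returns)
print(round(skew, 2), round(kurt, 2))  # 0.0 2.0: symmetric but heavy-tailed
```

Positive excess kurtosis like this is exactly the feature that makes volatility alone a misleading summary of downside risk.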
Quality gap of educational services in viewpoints of students in Hormozgan University of medical sciences
<p>Abstract</p> <p>Background</p> <p>Higher education is growing fast and every day it becomes more and more exposed to globalization processes. The aim of this study was to determine the quality gap of educational services by using a modified SERVQUAL instrument among students in Hormozgan University of Medical Sciences.</p> <p>Methods</p> <p>A cross-sectional study was carried out at Hormozgan University of Medical Sciences in 2007. In this study, a total of 300 students were selected randomly and asked to complete a questionnaire that was designed according to SERVQUAL methods. This questionnaire measured students' perceptions and expectations in five dimensions of service, consisting of assurance, responsiveness, empathy, reliability and tangibles. The quality gap of educational services was determined based on differences between students' perceptions and expectations.</p> <p>Results</p> <p>The results demonstrated that in each of the five SERVQUAL dimensions, there was a negative quality gap. The least and the most negative quality gap means were in the reliability (-0.71) and responsiveness (-1.14) dimensions respectively. Also, there were significant differences between perceptions and expectations of students in all of the five SERVQUAL dimensions (p < 0.001).</p> <p>Conclusion</p> <p>Negative quality gaps mean students' expectations exceed their perceptions. Thus, improvements are needed across all five dimensions.</p>
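The SERVQUAL gap score used above is simply the mean perception minus the mean expectation per dimension, so a negative value means expectations exceed perceptions. A minimal sketch with hypothetical 1-5 Likert responses for two of the five dimensions (the real study surveyed 300 students):

```python
# Hypothetical 1-5 Likert scores from three respondents; the actual study
# covered 300 students and five SERVQUAL dimensions.
perceptions = {
    "reliability":    [3.0, 3.4, 3.2],
    "responsiveness": [2.8, 2.6, 3.0],
}
expectations = {
    "reliability":    [4.0, 4.1, 3.9],
    "responsiveness": [4.0, 3.9, 4.1],
}

def quality_gap(perc, expc):
    """SERVQUAL gap per dimension: mean perception minus mean expectation.
    A negative gap means expectations exceed perceptions."""
    mean = lambda xs: sum(xs) / len(xs)
    return {d: round(mean(perc[d]) - mean(expc[d]), 2) for d in perc}

print(quality_gap(perceptions, expectations))  # negative gap in both dimensions
```

Ranking the dimensions by gap size is what lets a study single out, say, responsiveness as the weakest service dimension.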
Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?
BACKGROUND: Editorial peer review is universally used but little studied. We examined the relationship between external reviewers' recommendations and the editorial outcome of manuscripts undergoing external peer review at the Journal of General Internal Medicine (JGIM). METHODOLOGY/PRINCIPAL FINDINGS: We examined reviewer recommendations and editors' decisions at JGIM between 2004 and 2008. For manuscripts undergoing peer review, we calculated chance-corrected agreement among reviewers on recommendations to reject versus accept or revise. Using mixed effects logistic regression models, we estimated intra-class correlation coefficients (ICC) at the reviewer and manuscript level. Finally, we examined the probability of rejection in relation to reviewer agreement and disagreement. The 2264 manuscripts sent for external review during the study period received 5881 reviews provided by 2916 reviewers; 28% of reviews recommended rejection. Chance-corrected agreement (kappa statistic) on rejection among reviewers was 0.11 (p<.01). In mixed effects models adjusting for study year and manuscript type, the reviewer-level ICC was 0.23 (95% confidence interval [CI], 0.19-0.29) and the manuscript-level ICC was 0.17 (95% CI, 0.12-0.22). The editors' overall rejection rate was 48%: 88% when all reviewers for a manuscript agreed on rejection (7% of manuscripts) and 20% when all reviewers agreed that the manuscript should not be rejected (48% of manuscripts) (p<0.01). CONCLUSIONS/SIGNIFICANCE: Reviewers at JGIM agreed on recommendations to reject vs. accept/revise at levels barely beyond chance, yet editors placed considerable weight on reviewers' recommendations. Efforts are needed to improve the reliability of the peer-review process while helping editors understand the limitations of reviewers' recommendations.
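The chance-corrected agreement statistic in this abstract is, for the two-rater case, Cohen's kappa, and it can be sketched directly. The reject (1) versus accept-or-revise (0) calls below are hypothetical, not JGIM data.

```python
def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over binary labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    p1 = (sum(r1) / n) * (sum(r2) / n)             # chance both say 1
    p0 = (1 - sum(r1) / n) * (1 - sum(r2) / n)     # chance both say 0
    pe = p1 + p0
    return (po - pe) / (1 - pe)

# Hypothetical reject/accept-or-revise calls on ten manuscripts: the raters
# agree 60% of the time, but mostly where chance alone would predict it.
rev_a = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
rev_b = [0, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(rev_a, rev_b), 2))  # 0.05: barely beyond chance
```

A kappa this close to 0 despite 60% raw agreement is the same phenomenon the study reports: most of the reviewers' agreement is what chance alone would produce.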
Integrating complementary and alternative medicine into academic medical centers: Experience and perceptions of nine leading centers in North America
BACKGROUND: Patients across North America are using complementary and alternative medicine (CAM) with increasing frequency as part of their management of many different health conditions. The objective of this study was to develop a guide for academic health sciences centers that may wish to consider starting an integrative medicine program. METHODS: We queried North American leaders in the field of integrative medicine to identify initial sites. Key stakeholders at each of the initial sites visited were then asked to identify additional potential study sites (snowball sampling), until no new sites were identified. We conducted structured interviews to identify critical factors associated with success and failure in each of four domains: research, education, clinical care, and administration. During the interviews, field notes were recorded independently by at least two investigators. Team meetings were held after each visit to reach consensus on the information recorded and to ensure that it was as complete as possible. Content analysis techniques were used to identify key themes that emerged from the field notes. RESULTS: We identified ten leading North American integrative medical centers, and visited nine during 2002–2003. The centers visited suggested that the initiation of an integrative medicine program requires a significant initial outlay of funding and a motivated "champion". The centers had important information to share regarding credentialing, medico-legal issues and billing for clinical programs; identifying researchers and research projects for a successful research program; and strategies for implementing flexible educational initiatives and establishing a functional administrative structure. CONCLUSION: Important lessons can be learned from academic integrative programs already in existence. Such initiatives are timely and feasible in a variety of ways and settings.
Complex Processes from Dynamical Architectures with Time-Scale Hierarchy
The idea that complex motor, perceptual, and cognitive behaviors are composed of smaller units, which are somehow brought into a meaningful relation, permeates the biological and life sciences. However, no principled framework defining the constituent elementary processes has been developed to date. Consequently, functional configurations (or architectures) relating elementary processes and external influences are mostly piecemeal formulations suitable to particular instances only. Here, we develop a general dynamical framework for distinct functional architectures characterized by the time-scale separation of their constituents and evaluate their efficiency. To that end, we build on the (phase) flow of a system, which prescribes the temporal evolution of its state variables. The phase flow topology allows for the unambiguous classification of qualitatively distinct processes, which we consider to represent the functional units or modes within the dynamical architecture. Using the example of a composite movement, we illustrate how different architectures can be characterized by their degree of time-scale separation between the internal elements of the architecture (i.e. the functional modes) and external interventions. We reveal a tradeoff of the interactions between internal and external influences, which offers a theoretical justification for the efficient composition of complex processes out of non-trivial elementary processes or functional modes.
Healthcare costs and utilization for Medicare beneficiaries with Alzheimer's
<p>Abstract</p> <p>Background</p> <p>Alzheimer's disease (AD) is a neurodegenerative disorder incurring significant social and economic costs. This study uses a US administrative claims database to evaluate the effect of AD on direct healthcare costs and utilization, and to identify the most common reasons for AD patients' emergency room (ER) visits and inpatient admissions.</p> <p>Methods</p> <p>Demographically matched cohorts age 65 and over with comprehensive medical and pharmacy claims from the 2003–2004 MEDSTAT MarketScan<sup>® </sup>Medicare Supplemental and Coordination of Benefits (COB) Database were examined: 1) 25,109 individuals with an AD diagnosis or a filled prescription for an exclusively AD treatment; and 2) 75,327 matched controls. Illness burden for each person was measured using Diagnostic Cost Groups (DCGs), a comprehensive morbidity assessment system. Cost distributions and reasons for ER visits and inpatient admissions in 2004 were compared for both cohorts. Regression was used to quantify the marginal contribution of AD to health care costs and utilization, and the most common reasons for ER and inpatient admissions, using DCGs to control for overall illness burden.</p> <p>Results</p> <p>Compared with controls, the AD cohort had more co-morbid medical conditions, higher overall illness burden, and higher but less variable costs (10,369; Coefficient of variation = 181 vs. 324). Significant excess utilization was attributed to AD for inpatient services, pharmacy, ER visits, and home health care (all p < 0.05). In particular, AD patients were far more likely to be hospitalized for infections, pneumonia and falls (hip fracture, syncope, collapse).</p> <p>Conclusion</p> <p>Patients with AD have significantly more co-morbid medical conditions and higher healthcare costs and utilization than demographically-matched Medicare beneficiaries. 
Even after adjusting for differences in co-morbidity, AD patients incur excess ER visits and inpatient admissions.</p>