18 research outputs found

    Games played on Australian university alliance websites as they collaborate and compete

    As competition for funding, staff and students increases and the game becomes more complex, universities across the sector are required to identify new and more elaborate ways of competing. The development of university ranking systems has encouraged this competitive game. The relevance of university rankings is discussed. The paper concludes with a reflection on why, despite their apparent importance as capital, these international and national rankings are predominantly absent from the alliances’ websites. These explorations provide insight into the complex games universities play in order to collaborate and to compete for government, private and research funding, and for staff and students.

    Rank Incentives, Social Tournaments, Feedback, Field Experiment

    Performance rankings are a very common workplace management practice. Behavioral theories suggest that providing performance rankings to employees, even without pecuniary consequences, may directly shape effort due to the rank’s effect on self-image. In a three-year randomized controlled trial with full-time furniture salespeople (n = 1754), I study the effect on sales performance in a two-by-two experimental design where I vary (i) whether to privately inform employees about their performance rank; and (ii) whether to give benchmarks, i.e. data on the current performance required to be in the top 10%, 25% and 50%. The salespeople’s compensation is based only on absolute performance via a high-powered commission scheme, so rankings convey no direct additional financial benefit. There are two important innovations in this experiment. First, prior to the start of the experiment all salespeople were told their performance ranking. Second, employees operate in a multi-tasking environment where they can sell multiple brands. There are four key results. First, removing rank feedback actually increases sales performance by 11%, or one tenth of a standard deviation. Second, only men (not women) change their performance. Third, adding benchmarks to rank feedback significantly raises performance relative to rank feedback alone, but the result is not significantly different from providing no feedback at all. Fourth, as predicted by the multi-tasking model, the treatment effect increases with the scope for effort substitution across furniture brands, as employees switch their effort to other tasks when their rank is worse than expected.
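    For concreteness, the benchmark treatment described above amounts to reporting upper quantiles of the sales distribution. A minimal Python sketch, using entirely synthetic numbers rather than data from the study:

```python
import numpy as np

# Synthetic monthly sales for a hypothetical sales force (not study data).
rng = np.random.default_rng(seed=0)
sales = rng.lognormal(mean=10.0, sigma=0.5, size=1754)

# Benchmark: performance currently required to be in the top 10%, 25%, 50%.
for top_share in (0.10, 0.25, 0.50):
    cutoff = np.quantile(sales, 1.0 - top_share)
    print(f"top {top_share:.0%} cutoff: {cutoff:,.0f}")
```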

    Social Choice for Partial Preferences Using Imputation

    Within the field of multiagent systems, the area of computational social choice considers the problems that arise when decisions must be made collectively by a group of agents. Usually such systems collect a ranking of the alternatives from each member of the group in turn, and aggregate these individual rankings to arrive at a collective decision. However, when there are many alternatives to consider, individual agents may be unwilling, or unable, to rank all of them, leading to decisions that must be made on the basis of incomplete information. While earlier approaches attempt to work with the provided rankings by making assumptions about the nature of the missing information, this can lead to undesirable outcomes when the assumptions do not hold, and is ill-suited to certain problem domains. In this thesis, we propose a new approach that uses machine learning algorithms (both conventional and purpose-built) to generate plausible completions (imputations) of each agent’s rankings on the basis of the partial rankings the agent provided, in a way that reflects the agents’ true preferences. We show that the combination of existing social choice functions with certain classes of imputation algorithms, which forms the core of our proposed solution, is equivalent to a form of social choice. Our system then undergoes an extensive empirical validation under 40 different test conditions, involving more than 50,000 group decision problems generated from real-world electoral data, and is found to significantly outperform existing competitors, leading to better group decisions overall. Detailed empirical findings are also used to characterize the behaviour of the system and to illustrate the circumstances in which it is most advantageous. A general testbed (Prefmine) for comparing solutions using real-world and artificial data is then described, in conjunction with results that justify its design decisions. We move on to propose a new machine learning algorithm intended specifically to learn and impute the preferences of agents, and validate its effectiveness. This Markov-Tree approach is demonstrated to be superior to imputation using conventional machine learning, and has a simple interpretation that characterizes the problems on which it will perform well. Later chapters contain an axiomatic validation of both of our new approaches, as well as techniques for mitigating their manipulability. The thesis concludes with a discussion of the applicability of its contributions, both for multiagent systems and for settings involving human elections. In all, we reveal an interesting connection between machine learning and computational social choice, and introduce a testbed that facilitates future research on computational social choice for partial preferences by allowing empirical comparisons between competing approaches to be conducted easily, accurately, and quickly. Perhaps most importantly, we offer an important and effective new direction for enabling group decision making when preferences are not completely specified, using imputation methods.
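    To make the aggregate-after-imputation idea concrete, here is a minimal Python sketch: unranked alternatives are appended by a deliberately naive rule (a hypothetical stand-in for the learned imputation the thesis proposes, such as the Markov-Tree model), and the completed profile is aggregated with a Borda count. The alternatives and partial rankings are invented for illustration.

```python
def borda_winner(rankings, alternatives):
    """Aggregate complete rankings with the Borda count."""
    scores = {a: 0 for a in alternatives}
    m = len(alternatives)
    for ranking in rankings:
        for pos, alt in enumerate(ranking):
            scores[alt] += m - 1 - pos  # top position earns m-1 points
    return max(scores, key=scores.get)

def impute(partial, alternatives):
    """Naive imputation: append unranked alternatives in listing order.
    (Hypothetical placeholder for a learned imputation model.)"""
    return list(partial) + [a for a in alternatives if a not in partial]

alternatives = ["a", "b", "c", "d"]
partial_profile = [["a", "b"], ["b", "a", "c"], ["a", "c"]]
complete_profile = [impute(p, alternatives) for p in partial_profile]
print(borda_winner(complete_profile, alternatives))  # -> "a"
```

    Because imputation restores a complete profile, any standard social choice function can then be applied unchanged; the quality of the final decision hinges on how faithfully the imputation reflects the agents’ true preferences.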

    Linked Data Quality Assessment and its Application to Societal Progress Measurement

    In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration, where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible due to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, thus making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case. Datasets that contain quality problems may still be useful for certain applications, so quality depends on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused either by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is a particular challenge in LD, as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes assessing the quality crucial for measuring how accurately the real world is represented. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD, and then propose and compare several assessment methodologies.

    The first methodology employs LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia, and the second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand the quality issues and fix them. Finally, we consider a domain-specific use case that consumes LD and depends on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
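    As a toy illustration of making quality measurable, the sketch below computes one simple metric over an RDF dataset: the share of subjects that carry an rdf:type statement, a crude proxy for adequacy of semantic representation. It assumes the Python rdflib library; the file name is hypothetical, and this simplified metric is not one of the 69 metrics as formally defined in the thesis.

```python
from rdflib import Graph, RDF

# Load the dataset to assess (file name is hypothetical).
g = Graph()
g.parse("dataset_sample.ttl", format="turtle")

# Metric sketch: fraction of distinct subjects that have an rdf:type triple.
subjects = set(g.subjects())
typed = {s for s in subjects if (s, RDF.type, None) in g}
coverage = len(typed) / len(subjects) if subjects else 1.0
print(f"subjects with rdf:type: {len(typed)}/{len(subjects)} ({coverage:.1%})")
```

    A tool in the R2RLint spirit would additionally report the offending entities themselves (here, `subjects - typed`) so that users can locate and fix the underlying problems.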

    Study on open science: The general state of the play in Open Science principles and practices at European life sciences institutes

    Nowadays, open science is a hot topic at all levels and is also one of the priorities of the European Research Area. Components that are commonly associated with open science are open access, open data, open methodology, open source, open peer review, open science policies and citizen science. Open science may have great potential to connect and influence the practices of researchers, funding institutions and the public. In this paper, we evaluate the level of openness based on public surveys at four European life sciences institutes.

    Santa Clara Magazine, Volume 51 Number 4, Spring 2010

    14 - BENDING LIGHT By Steven Boyd Saum. They wanted to show that green living is not a compromise. So, for the international Solar Decathlon, the SCU-led Team California built a house of light and wonder. And it was dazzling enough to win No. 3 on the planet. 22 - CONNECT THE DOTS By Scott Brown ’93. From border security to disaster preparedness, Secretary of Homeland Security Janet Napolitano ’79 has one immense portfolio. She’s also the point person on immigration. How to put those together? 28 - THIS PLACE WE CALL HOME By Kristina Chiapella ’09. Generations ago, Native Americans in the Bay Area lost their land, and the land lost them. But that is hardly the end of the story. 34 - BREAKING BREAD By Dona Leyva. For Claudia Pruett ’83, MBA ’87, it’s a family affair wrapped in love and tradition, including 50 years of serving lasagna to SCU econ majors by her parents, Rose and Mario Belotti.
    https://scholarcommons.scu.edu/sc_mag/1004/thumbnail.jp

    June 25, 2016 (Weekend) Daily Journal
