A new structured mathematical model of the composting process
Thermal expansion of mantle minerals at high pressures - A theoretical study
Recent experimental work has shown that the pressure dependence of the thermal expansion coefficient can be expressed as:
(α/α_{0}) = (ρ/ρ_{0})^{-δ_{T}}    (1)
where δ_{T}, the Anderson-Gruneisen parameter, is assumed to be independent of pressure and, for the materials studied, has a value that lies between 4 and 6. Calculation of δ_{T} from seismic data, however, appears to suggest a contradictory value of between 2 and 3 for mantle-forming phases. Using an atomistic model based on our previously successful many-body interatomic potential set (THB1), we have performed calculations to obtain values of δ_{T} for four major mantle-forming minerals. Our model results are in excellent agreement with experimental data, yielding values of between 4 and 6 for forsterite and MgO, and values in the same range for MgSiO_{3}-perovskite and γ-Mg_{2}SiO_{4}. Moreover, the calculations confirm that δ_{T} is indeed constant with pressure up to the core-mantle boundary. The apparent conflict between the values of δ_{T} predicted from seismic data and those obtained from experiment, and now from theory, is discussed.
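Equation (1) can be evaluated numerically; a minimal sketch (the compression ratio and δ_T value below are illustrative mid-range choices, not results from the paper):

```python
# Evaluate (alpha/alpha_0) = (rho/rho_0)^(-delta_T), assuming delta_T is
# pressure-independent, as the abstract concludes. Input values below are
# hypothetical examples, not data from the study.

def thermal_expansion_ratio(rho_ratio: float, delta_T: float) -> float:
    """Return alpha/alpha_0 for a given compression ratio rho/rho_0."""
    return rho_ratio ** (-delta_T)

# ~30% compression (rho/rho_0 = 1.3) with delta_T = 5 (mid-range of 4-6):
ratio = thermal_expansion_ratio(1.3, 5.0)  # thermal expansivity drops sharply
```

Because δ_T lies between 4 and 6, even modest compression suppresses the thermal expansion coefficient strongly, which is the physical content of the constancy result discussed above.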
Matching under Preferences
Matching theory studies how agents and/or objects from different sets can be matched with each other while taking agents' preferences into account. The theory originated in 1962 with a celebrated paper by David Gale and Lloyd Shapley (1962), in which they proposed the Stable Marriage Algorithm as a solution to the problem of two-sided matching. Since then, this theory has been successfully applied to many real-world problems such as matching students to universities, doctors to hospitals, kidney transplant patients to donors, and tenants to houses. This chapter will focus on algorithmic as well as strategic issues of matching theory.
Many large-scale centralized allocation processes can be modelled by matching problems where agents have preferences over one another. For example, in China, over 10 million students apply for admission to higher education annually through a centralized process. The inputs to the matching scheme include the students' preferences over universities, and vice versa, and the capacities of each university. The task is to construct a matching that is in some sense optimal with respect to these inputs.
Economists have long understood the problems with decentralized matching markets, which can suffer from such undesirable properties as unravelling, congestion and exploding offers (see Roth and Xing, 1994, for details). For centralized markets, constructing allocations by hand for large problem instances is clearly infeasible. Thus centralized mechanisms are required for automating the allocation process.
Given the large number of agents typically involved, the computational efficiency of a mechanism's underlying algorithm is of paramount importance. Thus we seek polynomial-time algorithms for the underlying matching problems. Equally important are considerations of strategy: an agent (or a coalition of agents) may manipulate their input to the matching scheme (e.g., by misrepresenting their true preferences or underreporting their capacity) in order to try to improve their outcome. A desirable property of a mechanism is strategyproofness, which ensures that it is in the best interests of an agent to behave truthfully.
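The Gale-Shapley deferred-acceptance procedure mentioned above can be sketched as follows (names and preference data are illustrative; the one-to-one marriage case is shown, not the capacitated university-admission variant):

```python
# Gale-Shapley deferred acceptance for the stable marriage problem
# (two equal-sized sides with complete preference lists).

def stable_marriage(proposer_prefs, responder_prefs):
    """Return a proposer-optimal stable matching as {proposer: responder}."""
    # rank[r][p] = position of proposer p in responder r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    free = list(proposer_prefs)                    # currently unmatched proposers
    next_choice = {p: 0 for p in proposer_prefs}   # next responder to propose to
    engaged = {}                                   # responder -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                         # r accepts tentatively
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                         # r rejects p

    return {p: r for r, p in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
matching = stable_marriage(men, women)             # {'a': 'x', 'b': 'y'}
```

The proposer-optimal matching produced here is also strategyproof for the proposing side, which connects to the truthfulness property discussed above; the responding side, by contrast, can sometimes gain by misreporting.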
How to Work with Honest but Curious Judges? (Preliminary Report)
The three-judges protocol, recently advocated by McIver and Morgan as an
example of stepwise refinement of security protocols, studies how to securely
compute the majority function to reach a final verdict without revealing each
individual judge's decision. We extend their protocol in two different ways for
an arbitrary number of 2n+1 judges. The first generalisation is inherently
centralised, in the sense that it requires a judge as a leader who collects
information from others, computes the majority function, and announces the
final result. A different approach can be obtained by slightly modifying the
well-known dining cryptographers protocol; however, it reveals the number of
votes rather than the final verdict. We define a notion of conditional
anonymity in order to analyse these two solutions. Both of them have been
checked in the model checker MCMAS.
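The vote-count-revealing variant can be illustrated with a generic additive-secret-sharing tally; this is a standard technique shown here for intuition only, not the protocol analysed in the paper:

```python
import random

# Each of the 2n+1 judges splits a 0/1 vote into one random share per judge,
# modulo m. Each judge publishes only the sum of the shares it received, so
# the combined announcements reveal the total vote count (as the abstract
# notes for the dining-cryptographers variant) but no individual vote.

def tally_votes(votes, m=None):
    n = len(votes)                                   # 2n+1 judges in the paper
    m = m or (n + 1)                                 # modulus exceeds max total
    shares = []
    for v in votes:
        parts = [random.randrange(m) for _ in range(n - 1)]
        parts.append((v - sum(parts)) % m)           # shares sum to v mod m
        shares.append(parts)
    # Judge j publishes the sum of column j (the shares sent to it).
    published = [sum(shares[i][j] for i in range(n)) % m for j in range(n)]
    return sum(published) % m                        # total "guilty" votes

votes = [1, 0, 1, 1, 0]                              # five judges, votes secret
total = tally_votes(votes)                           # 3; majority since 3 > 2
```

As the abstract observes, revealing the exact total is weaker than revealing only the verdict, which is why the paper introduces conditional anonymity to compare the two solutions.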
Popular and/or Prestigious? Measures of Scholarly Esteem
Citation analysis does not generally take the quality of citations into
account: all citations are weighted equally irrespective of source. However, a
scholar may be highly cited but not highly regarded: popularity and prestige
are not identical measures of esteem. In this study we define popularity as the
number of times an author is cited and prestige as the number of times an
author is cited by highly cited papers. Information Retrieval (IR) is the test
field. We compare the 40 leading researchers in terms of their popularity and
prestige over time. Some authors are ranked high on prestige but not on
popularity, while others are ranked high on popularity but not on prestige. We
also relate measures of popularity and prestige to date of Ph.D. award, number
of key publications, organizational affiliation, receipt of prizes/honors, and
gender.
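The two measures defined above can be computed from a citation graph; a toy sketch, with a hypothetical "highly cited" threshold and invented data:

```python
# Popularity = total citations to an author; prestige = citations received
# from highly cited papers, following the abstract's definitions. The
# threshold and the toy data below are illustrative assumptions.

def popularity_and_prestige(papers, citations, author_of, threshold=2):
    """
    papers:    list of paper ids
    citations: list of (citing, cited) pairs
    author_of: paper id -> author name
    threshold: min citations for a citing paper to count as highly cited
    """
    cites_to = {p: 0 for p in papers}
    for citing, cited in citations:
        cites_to[cited] += 1
    highly_cited = {p for p, c in cites_to.items() if c >= threshold}

    popularity, prestige = {}, {}
    for citing, cited in citations:
        a = author_of[cited]
        popularity[a] = popularity.get(a, 0) + 1
        if citing in highly_cited:
            prestige[a] = prestige.get(a, 0) + 1
    return popularity, prestige

papers = ["p1", "p2", "p3"]
citations = [("p2", "p1"), ("p3", "p1"), ("p1", "p3")]
author_of = {"p1": "A", "p2": "B", "p3": "C"}
pop, prest = popularity_and_prestige(papers, citations, author_of)
# pop == {"A": 2, "C": 1}; prest == {"C": 1}:
# A is the more popular author, yet only C has any prestige.
```

Even this tiny example reproduces the divergence the study reports: an author can rank high on popularity while ranking low on prestige, and vice versa.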
Modeling of the Labour Force Redistribution in Investment Projects with Account of their Delay
A mathematical model of labour force redistribution in investment projects is presented in the article. The model redistributes funds, labour force in particular, according to an equal-risk approach applied to the loss of assets caused by delays across all the investment projects. A worked example of the model for three investment projects, with specified labour force volumes and defined unit costs at a particular moment, is given.