
    Friedman, Harsanyi, Rawls, Boulding - or Somebody Else?

    This paper investigates distributive justice using a fourfold experimental design: the ignorance and the risk scenarios are combined with the self-concern and the umpire modes. We study behavioral switches between the self-concern and umpire modes and investigate the goodness of ten standards of behavior. In the ignorance scenario, subjects became on average less inequality averse as umpires. A within-subjects analysis shows that about one half became less inequality averse, one quarter became more inequality averse, and one quarter left their behavior unchanged as umpires. In the risk scenario, subjects became on average more inequality averse in their umpire roles. A within-subjects analysis shows that half of them became more inequality averse, one quarter became less inequality averse, and one quarter left their behavior unchanged as umpires. As to the standards of behavior, several prominent ones (leximin, leximax, Gini, Cobb-Douglas) received only weak support, while expected utility, Boulding's hypothesis, the entropy social welfare function, and randomization preference enjoyed impressive acceptance. For the risk scenario, the tax standard of behavior joins the favorite standards of behavior. Keywords: distributive justice; income distributions; veil of ignorance

    Disaggregated Approaches to Freight Analysis: A Feasibility Study.

    Forecasting the demand for freight transport is notoriously difficult. Although ever more advanced modelling techniques are becoming available, there is little data available for calibration. Compared to passenger travel, there are many fewer decision makers in freight, especially for the main bulk commodities, so the decisions of a relatively small number of principal players greatly influence the outcome. Moreover, freight comes in various shapes, sizes and physical states, which require different handling methods and suit the various modes (and sub-modes) of transport differently. In the face of these difficulties, present DTp practice is to forecast Britain's freight traffic using a very simple aggregate approach which assumes that tonne kilometres will rise in proportion to GDP. Although this simple model fits historical data quite well, there is a clear danger that this relationship will not hold good in the future. The relationship between tonne kilometres and GDP depends on the mix of products produced, their value-to-weight ratios, the number of times goods are lifted and the lengths of haul. In the past, a declining ratio of tonnes to GDP has been offset by increasing lengths of haul. This has come about through a complicated set of changes in product mix, industrial structure and distribution systems. A more disaggregate approach which studies changes in all these factors by industrial sector seems likely to provide a better understanding of the relationship between tonne kilometres and GDP. However, there are also problems with disaggregation. As we disaggregate we gain more understanding of what might change in the future, but are less able to project trends forward. This can be seen if we consider future coal movements. Theoretically there is clearly scope for better forecasting by allowing for past trends to be overturned by a movement towards gas-powered electricity generation and more imports of coal direct to coastal power stations.
However, making such a sectoral forecast is extremely difficult, and inaccuracy here may more than offset the theoretical gain referred to earlier. This is because it is usually easier to forecast an aggregate to a given percentage accuracy than its components. For example, the percentage error on sales forecasts of Hotpoint washing machines will be greater than that for the sales of all washing machines taken together. This occurs because different makes of washing machines are substitutes for each other, so forecasts for Hotpoint washing machines must take into account uncertainty over Hotpoint's market share as well as uncertainty over the future total sales of washing machines. Nevertheless, a disaggregate investigation of the market could spot trends which were 'buried' in the aggregate figures. For example, rapidly declining sales for one manufacturer might indicate their leaving the market, which, by reducing competition, would push up prices and so reduce total future sales. We have assumed above that the use of the term disaggregate in the brief refers to disaggregation by industrial sector. An alternative usage of the word disaggregate in this context is when referring to modelling at the level of the individual decision-making unit. Disaggregate freight modelling in this sense would involve analysing decisions in order to determine the utility weight attached to different attributes of available transport options. Because data on suitable decisions are not readily available in this country, due to commercial confidentiality, we have recently undertaken research in which we have presented decision makers with hypothetical choices and obtained the necessary utility weights from their responses. Whilst initial scepticism is understandable, this method has produced results acceptable for use in major projects.
ITS itself has provided algorithms (known as Leeds Adaptive Stated Preference) which have been used to derive utility weights for use by British Rail in forecasting cross-channel freight, by DTp in evaluating the reaction of commercial vehicles to toll roads, and by the Dutch Ministry of Transport in modelling freight in the Netherlands. In the light of the above, the following objectives were set for the feasibility study: (1) to determine if a forecasting approach disaggregated by industrial sectors, as under the first definition above, can be used to explain recent trends in freight transport; (2) to test the feasibility of the disaggregated approach for improving the understanding of likely future developments in freight markets, this being informed by current best understanding of the disaggregate decision-making process, as under the second definition above.
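The claim above that an aggregate can usually be forecast to a better percentage accuracy than its components can be illustrated with a small simulation. The sketch below is purely illustrative (the sales levels, shares, and noise figures are invented, not from the study): total sales carry one source of noise, while one brand's sales additionally carry market-share noise, so the brand-level percentage error is larger.

```python
import random

random.seed(0)

# Hypothetical numbers: total washing-machine sales are fairly stable,
# but one maker's market share fluctuates (brands are substitutes).
n_trials = 10_000
total_errors, brand_errors = [], []

for _ in range(n_trials):
    total = random.gauss(100.0, 5.0)   # total sales: ~5% noise
    share = random.gauss(0.30, 0.05)   # one brand's share: extra noise
    brand = total * share

    # Naive point forecasts: the long-run means (100 and 30).
    total_errors.append(abs(total - 100.0) / 100.0)
    brand_errors.append(abs(brand - 30.0) / 30.0)

avg_total = sum(total_errors) / n_trials
avg_brand = sum(brand_errors) / n_trials
print(f"mean % error, aggregate forecast: {avg_total:.1%}")
print(f"mean % error, single-brand forecast: {avg_brand:.1%}")
```

Because the brand forecast compounds share uncertainty on top of total-market uncertainty, its mean percentage error comes out several times larger than the aggregate's, matching the argument in the abstract.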

    Is prevention better than cure? An empirical investigation for the case of avian influenza

    The new EU Animal Health Strategy suggests a shift in emphasis away from control towards prevention and surveillance activities for the management of threats to animal health. The optimal combination of these actions will differ among diseases and depend on largely unknown and uncertain costs and benefits. This paper reports an empirical investigation of this issue for the case of Avian Influenza. The results suggest that the optimal combination of actions will be dependent on the objective of the decision maker and that conflict exists between an optimal strategy which minimises costs to the government and one which maximises producer profits or minimises negative effects on human health. From the perspective of minimising the effects on human health, prevention appears preferable to cure but the case is less clear for other objectives

    The Vadalog System: Datalog-based Reasoning for Knowledge Graphs

    Over the past years, there has been a resurgence of Datalog-based systems in the database community as well as in industry. In this context, it has been recognized that to handle the complex knowledge-based scenarios encountered today, such as reasoning over large knowledge graphs, Datalog has to be extended with features such as existential quantification. Yet, Datalog-based reasoning in the presence of existential quantification is in general undecidable. Many efforts have been made to define decidable fragments. Warded Datalog+/- is a very promising one, as it captures PTIME complexity while allowing ontological reasoning. Yet so far, no implementation of Warded Datalog+/- was available. In this paper we present the Vadalog system, a Datalog-based system for performing complex logic reasoning tasks, such as those required in advanced knowledge graphs. The Vadalog system is Oxford's contribution to the VADA research programme, a joint effort of the universities of Oxford, Manchester and Edinburgh and around 20 industrial partners. As the main contribution of this paper, we illustrate the first implementation of Warded Datalog+/-, a high-performance Datalog+/- system utilizing an aggressive termination control strategy. We also provide a comprehensive experimental evaluation.
    Comment: Extended version of VLDB paper, https://doi.org/10.14778/3213880.3213888
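To make the Datalog-based reasoning concrete, here is a toy bottom-up (naive fixpoint) evaluation of the classic recursive rules `reachable(X,Y) :- edge(X,Y)` and `reachable(X,Z) :- reachable(X,Y), edge(Y,Z)`. This illustrates plain Datalog only; it is not Vadalog's engine, and Warded Datalog+/- additionally handles existential quantification with termination control.

```python
# Naive fixpoint evaluation of transitive closure, the standard
# introductory recursive Datalog program (illustrative sketch only).

def transitive_closure(edges):
    # Rule 1: reachable(X, Y) :- edge(X, Y).
    reachable = set(edges)
    while True:
        # Rule 2: reachable(X, Z) :- reachable(X, Y), edge(Y, Z).
        derived = {(x, z)
                   for (x, y) in reachable
                   for (y2, z) in edges
                   if y == y2}
        if derived <= reachable:      # fixpoint reached: nothing new
            return reachable
        reachable |= derived

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# → [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

For plain Datalog this fixpoint always terminates because only finitely many facts can be built from the given constants; the undecidability mentioned in the abstract arises once existential quantification can invent fresh values.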

    National Multi-Modal Travel Forecasts. Literature Review: Aggregate Models

    This paper reviews the current state-of-the-art in the production of National Multi-Modal Travel Forecasts. The review concentrates on the UK travel market and the various attempts to produce a set of accurate, coherent and credible forecasts. The paper starts with a brief introduction to the topic area. The second section gives a description of the background to the process and the problems involved in producing forecasts. Much of the material and terminology in the section, which covers modelling methodologies, is from Ortúzar and Willumsen (1994). The paper then goes on to review the forecasting methodology used by the Department of Transport (DoT) to produce the periodic National Road Traffic Forecasts (NRTF), which are the most significant set of travel forecasts in the UK. A brief explanation of the methodology will be given. The next section contains details of how other individuals and organisations have used, commented on or attempted to enhance the DoT methodology and forecasts. It will be noted that the DoT forecasts are only concerned with road traffic, with other modes (rail, air and sea) only impacting on these forecasts when there is a transfer to or from the road transport sector. So the following sections explore the attempts to produce explicit travel and transportation forecasts for these other modes. The final section gathers together a set of issues which are raised by this review and might be considered by the project.

    MobiCacher: Mobility-Aware Content Caching in Small-Cell Networks

    Small-cell networks have been proposed to meet the demand of ever-growing mobile data traffic. One of the prominent challenges faced by small-cell networks is the lack of sufficient backhaul capacity to connect small-cell base stations (small-BSs) to the core network. We exploit the effective application-layer semantics of both spatial and temporal locality to reduce the backhaul traffic. Specifically, small-BSs are equipped with storage facilities to cache contents requested by users. As the cache hit ratio increases, most of the users' requests can be satisfied locally without incurring traffic over the backhaul. To make informed caching decisions, the mobility patterns of users must be carefully considered, as users might frequently migrate from one small cell to another. We study the issue of mobility-aware content caching, which is formulated as an optimization problem with the objective of maximizing the caching utility. As the problem is NP-complete, we develop a polynomial-time heuristic solution termed MobiCacher with a bounded approximation ratio. We also conduct trace-based simulations to evaluate the performance of MobiCacher, which show that MobiCacher yields better caching utility than existing solutions.
    Comment: Accepted by Globecom 201
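The optimization problem described resembles a knapsack: pick contents for a size-limited cache to maximize expected locally served requests. The sketch below is a generic greedy heuristic for that shape of problem, not the paper's MobiCacher algorithm; the utility model (popularity weighted by the probability that requesting users remain attached to the cell) and all numbers are invented for illustration.

```python
# Hypothetical greedy sketch of mobility-aware caching at one small-BS.
# Not the MobiCacher algorithm; the paper's utility model differs.

def cache_contents(contents, capacity):
    """Choose which contents to cache in one small cell.

    contents: list of (name, size, popularity, stay_prob), where
      stay_prob is the chance requesting users are still attached to
      this cell when served (the mobility-aware part).
    capacity: cache size available at the small-cell base station.
    """
    # Utility density: expected locally served requests per unit size.
    def utility_density(c):
        _, size, popularity, stay_prob = c
        return popularity * stay_prob / size

    cached, used = [], 0
    for c in sorted(contents, key=utility_density, reverse=True):
        name, size, *_ = c
        if used + size <= capacity:   # greedily keep densest items that fit
            cached.append(name)
            used += size
    return cached

catalog = [
    ("video_a", 4, 100, 0.9),  # popular, and users likely to stay
    ("video_b", 4, 120, 0.2),  # popular, but users usually hand over
    ("clip_c", 1, 30, 0.8),
    ("clip_d", 2, 10, 0.9),
]
print(cache_contents(catalog, capacity=6))
# → ['clip_c', 'video_a']
```

Note how mobility changes the decision: video_b is the most popular item, but its low stay probability pushes it below video_a once expected local hits are what count. A plain popularity-based cache would have ranked it first.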