874 research outputs found

    Mechanism design for eliciting probabilistic estimates from multiple suppliers with unknown costs and limited precision

    No full text
    This paper reports on the design of a novel two-stage mechanism, based on strictly proper scoring rules, that allows a centre to acquire a costly probabilistic estimate of some unknown parameter by eliciting and fusing estimates from multiple suppliers. Each of these suppliers is capable of producing a probabilistic estimate of any precision, up to a privately known maximum, and by fusing several low-precision estimates together the centre is able to obtain a single estimate with a specified minimum precision. Specifically, in the mechanism's first stage, M of N agents are pre-selected by eliciting their privately known costs. In the second stage, these M agents are sequentially approached in a random order and their private maximum precision is elicited. A payment rule, based on a strictly proper scoring rule, then incentivises them to make and truthfully report an estimate of this maximum precision, which the centre fuses with others until it achieves its specified precision. We formally prove that the mechanism is incentive compatible with respect to the costs, maximum precisions and estimates, and that it is individually rational. We present empirical results, show that our mechanism describes a family of possible ways to perform the pre-selection in the first stage, and formally prove that one member of this family dominates all others.
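
    The fusion step described above relies on a standard property of independent Gaussian estimates: their precisions (inverse variances) add, and the fused mean is the precision-weighted average of the reported means. A minimal sketch of that fusion loop in Python, with invented example numbers that are not taken from the paper:

        # Illustrative sketch: fusing independent Gaussian estimates of one parameter.
        # Each estimate is a (mean, precision) pair, with precision = 1 / variance.
        # Under independence, fused precision = sum of precisions, and the fused mean
        # is the precision-weighted average of the reported means.

        def fuse_until_target(estimates, target_precision):
            """Fuse (mean, precision) pairs in the given order until the accumulated
            precision reaches target_precision; return the fused estimate."""
            weighted_sum = 0.0
            total_precision = 0.0
            used = 0
            for mean, precision in estimates:
                weighted_sum += precision * mean
                total_precision += precision
                used += 1
                if total_precision >= target_precision:
                    break
            return weighted_sum / total_precision, total_precision, used

        # Example: three low-precision estimates fused to reach a target precision of 5.0.
        reports = [(10.2, 2.0), (9.7, 1.5), (10.5, 2.5)]
        print(fuse_until_target(reports, target_precision=5.0))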

    Mechanism design for information aggregation within the smart grid

    No full text
    The introduction of a smart electricity grid enables a greater amount of information exchange between consumers and their suppliers. This can be exploited by novel aggregation services to save money by purchasing electricity more cost-effectively for those consumers. If the aggregation service pays consumers for this information, then both parties could benefit. However, any such payment mechanism must be carefully designed to encourage the customers (say, home-owners) to exert effort in gathering information and then to report it truthfully to the aggregator. This work develops a model of the information aggregation problem in which each home has an autonomous home agent, which acts on its behalf to gather information and report it to the aggregation agent. The aggregator has its own historical consumption information for each house under its service, so it can make an imprecise estimate of the future aggregate consumption of the houses for which it is responsible. However, it uses the information sent by the home agents to make a more precise estimate and, in return, gives each home agent a reward whose amount is determined by the payment mechanism in use by the aggregator. This work considers three desirable properties of a mechanism: budget balance (the aggregator does not reward the agents more than it saves), incentive compatibility (agents are encouraged to report truthfully) and individual rationality (the payments to the home agents must outweigh their incurred costs).
    In this thesis, mechanism design is used to develop and analyse two mechanisms. The first, named the uniform mechanism, divides the savings made by the aggregator equally among the houses. It is Nash incentive compatible, strongly budget balanced and individually rational. However, the agents' rewards are not fair insofar as each agent is rewarded equally irrespective of that agent's actual contribution to the savings. This results in a smaller incentive for agents to produce precise reports. Moreover, it encourages undesirable behaviour from agents who are able to make the loads placed upon the grid more volatile, such that they are harder to predict. To resolve these issues, a novel scoring rule-based mechanism named sum of others' plus max is developed, which uses the spherical scoring rule to distribute rewards to agents more fairly, based on the accuracy and precision of their individual reports. This mechanism is weakly budget balanced, dominant strategy incentive compatible and individually rational. Moreover, it encourages agents to make their loads less volatile, such that they are more predictable. This has obvious advantages for the electricity grid; for example, the amount of spinning reserve generation can be reduced, reducing both the carbon output of the grid and the cost per unit of electricity.
    This work makes use of both theoretical and empirical analysis to evaluate these mechanisms. Theoretical analysis is used to prove budget balance, individual rationality and incentive compatibility. However, theoretical evaluation of the equilibrium strategies within each of the mechanisms quickly becomes intractable, so empirical evaluation is used to further analyse their properties. This analysis is first performed in an environment in which agents are able to manipulate their reports; further analysis then shows the behaviour of agents who are able to make themselves harder to predict, a scenario that has thus far not been discussed in the mechanism design literature. The empirical analysis shows that the sum of others' plus max mechanism provides greater incentives for agents to make precise predictions and that, as a result, the aggregator obtains more utility by implementing it than by implementing the uniform mechanism or no mechanism at all. Moreover, in settings that allow agents to manipulate the volatility of their loads, the uniform mechanism causes the aggregator to lose utility compared with using no mechanism, whereas the sum of others' plus max mechanism still increases the aggregator's utility.
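
    As an illustration of the scoring-rule payments discussed above, the spherical scoring rule pays a reported discrete distribution its reported probability of the realised outcome divided by the Euclidean norm of the whole report; any positive affine scaling of the score remains strictly proper. A hedged sketch in Python, where the consumption bands and the scaling constants alpha and beta are invented for illustration and are not taken from the thesis:

        import math

        def spherical_score(report, outcome):
            """Spherical (strictly proper) scoring rule for a discrete report.
            `report` maps each possible outcome to its reported probability;
            `outcome` is the outcome that was actually observed."""
            norm = math.sqrt(sum(p * p for p in report.values()))
            return report[outcome] / norm

        def payment(report, outcome, alpha=10.0, beta=0.0):
            """Hypothetical reward wrapper: scale and shift the raw score into a
            monetary payment (alpha and beta are illustrative constants)."""
            return alpha * spherical_score(report, outcome) + beta

        # Example: a home agent reports a distribution over three consumption bands.
        report = {"low": 0.2, "medium": 0.7, "high": 0.1}
        print(payment(report, outcome="medium"))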

    A Mechanism Design Approach to Bandwidth Allocation in Tactical Data Networks

    Get PDF
    The defense sector is undergoing a phase of rapid technological advancement in the pursuit of its goal of information superiority. This goal depends on a large network of complex interconnected systems - sensors, weapons, soldiers - linked through a maze of heterogeneous networks. The sheer scale and size of these networks prompt behaviors that go beyond conglomerations of systems, or 'system-of-systems'. The lack of a central locus and the disjointed, competing interests among large clusters of systems make them characteristic of an Ultra Large Scale (ULS) system. These traits of ULS systems challenge and undermine the fundamental assumptions of today's software and system engineering approaches. In the absence of a centralized controller, it is likely that system users will behave opportunistically to meet their local mission requirements rather than the objectives of the system as a whole. In these settings, methods and tools based on economics and game theory (like mechanism design) are likely to play an important role in achieving globally optimal behavior when the participants behave selfishly. Against this background, this thesis explores the potential of using computational mechanisms to govern the behavior of ultra-large-scale systems and achieve an optimal allocation of constrained computational resources. Our research focuses on improving the quality and accuracy of the common operating picture through the efficient allocation of bandwidth in tactical data networks among self-interested actors who may resort to strategic behavior. This research problem presents the kind of challenges we anticipate when dealing with ULS systems and, by addressing it, we hope to develop a methodology that will be applicable to ULS systems of the future. We build upon previous work investigating the application of auction-based mechanism design to dynamic, performance-critical and resource-constrained systems of interest to the defense community.
    In this thesis, we consider a scenario where a number of military platforms have been tasked with detecting and tracking targets. The sensors onboard a military platform have a partial and inaccurate view of the operating picture and need to make use of data transmitted from neighboring sensors in order to improve the accuracy of their own measurements. The communication takes place over tactical data networks with scarce bandwidth. The problem is compounded by the possibility that the local goals of military platforms might not be aligned with the global system goal. Such a scenario might occur in multi-flag, multi-platform military exercises, where the military commanders of each platform are more concerned with the well-being of their own platform than with that of others. There is therefore a need to design a mechanism that efficiently allocates the flow of data within the network to ensure that the resulting global performance maximizes the information gain of the entire system, despite the self-interested actions of the individual actors. We propose a two-stage mechanism based on modified strictly proper scoring rules, with unknown costs, whereby multiple sensor platforms can provide estimates of limited precision and the center does not have to rely on knowledge of the actual outcome when calculating payments. In particular, our work emphasizes the importance of applying robust optimization techniques to deal with the uncertainty in the operating environment. We apply our robust-optimization-based scoring rules algorithm to an agent-based model of the combat tactical data network and analyze the results obtained. Through this work we hope to demonstrate how mechanism design, perched at the intersection of game theory and microeconomics, is aptly suited to address one set of challenges of the ULS system paradigm: challenges not amenable to traditional system engineering approaches.
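
    On the last point, that the center does not rely on knowledge of the actual outcome when calculating payments, one common construction is to score each platform's report against an estimate fused from the other platforms' reports. The sketch below illustrates only that general idea, not the thesis's exact payment rule; the Gaussian parameterization and example numbers are assumptions:

        import math

        def gaussian_logpdf(x, mean, precision):
            """Log density of a Gaussian parameterized by mean and precision (1/variance)."""
            return 0.5 * (math.log(precision) - math.log(2 * math.pi)
                          - precision * (x - mean) ** 2)

        def fuse(reports):
            """Precision-weighted fusion of independent Gaussian reports (mean, precision)."""
            total_precision = sum(precision for _, precision in reports)
            fused_mean = sum(precision * mean for mean, precision in reports) / total_precision
            return fused_mean, total_precision

        def peer_based_score(reports, i):
            """Score platform i's report against the estimate fused from the other
            platforms' reports, so no ground-truth observation is needed."""
            others = [r for j, r in enumerate(reports) if j != i]
            reference_mean, _ = fuse(others)
            mean_i, precision_i = reports[i]
            return gaussian_logpdf(reference_mean, mean_i, precision_i)

        # Example: three sensor platforms report (mean, precision) estimates of a target track.
        reports = [(102.0, 4.0), (98.5, 2.0), (100.7, 3.0)]
        print([round(peer_based_score(reports, i), 3) for i in range(len(reports))])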

    A framework for managing global risk factors affecting construction cost performance

    Get PDF
    Poor cost performance of construction projects has been a major concern for both contractors and clients. The effective management of risk is thus critical to the success of any construction project, and the importance of risk management has grown as projects have become more complex and competition has increased. Contractors have traditionally used financial mark-ups to cover the risk associated with construction projects, but as competition increases and margins become tighter they can no longer rely on this strategy and must improve their ability to manage risk. Furthermore, the construction industry has witnessed significant changes, particularly in procurement methods, with clients allocating greater risks to contractors. Evidence shows that there is a gap between existing risk management techniques and tools, mainly built on normative statistical decision theory, and their practical application by construction contractors. The main reason behind the lack of use is that risk decision making within construction organisations is heavily based upon experience, intuition and judgement, and not on mathematical models. This thesis presents a model for managing global risk factors affecting the cost performance of construction projects. The model has been developed using a behavioural decision approach, fuzzy logic and artificial intelligence technologies. The methodology adopted to conduct the research involved a thorough literature survey on risk management, informal and formal discussions with construction practitioners to assess the extent of the problem, a questionnaire survey to evaluate the importance of global risk factors and, finally, repertory grid interviews aimed at eliciting relevant knowledge. There are several approaches to categorising the risks permeating construction projects. This research groups risks into three main categories, namely organisation-specific, global and Acts of God. It focuses on global risk factors because they are ill-defined, less understood by contractors and difficult to model, assess and manage, although they have a huge impact on cost performance. Generally, contractors, especially in developing countries, have insufficient experience and knowledge to manage them effectively. The research identified the following groups of global risk factors as having a significant impact on cost performance: estimator-related, project-related, fraudulent-practices-related, competition-related, construction-related, economy-related and politics-related factors. The model was tested for validity through a panel of validators (experts) and cross-sectional case studies, and the general conclusion was that it could provide valuable assistance in the management of global risk factors since it is effective, efficient, flexible and user-friendly. The findings stress the need to depart from traditional approaches and to explore new directions in order to equip contractors with effective risk management tools.

    Food security, risk management and climate change

    Get PDF
    This report is about food security, climate change and risk management; it identifies major constraints to the adaptive capacity of food organisations operating in Australia. Australia has enjoyed an unprecedented level of food security for more than half a century, but new uncertainties are emerging and it would be unrealistic – if not complacent – to assume the same level of food security will persist simply because of recent history. The project collected data from more than 36 case study organisations (both foreign and local) operating in the Australian food-supply chain, and found that for many businesses, risk management practices require substantial improvement to cope with and exploit the uncertainties that lie ahead. Three risks were identified as major constraints to the adaptive capacity of food organisations operating in Australia: risk management practices; an uncertain regulatory environment, itself a result of gaps in risk management; and climate change uncertainty and projections about climate change impacts, also related to risk management.

    Using Bayesian belief networks for reliability management: construction and evaluation: a step by step approach

    Get PDF
    In the capital goods industry, there is a growing need to manage reliability throughout the product development process. A number of trends can be identified that have a strong effect on the way in which reliability prediction and management is approached, i.e.:
    - The lifecycle costs approach that is becoming increasingly important for original equipment manufacturers
    - The increasing product complexity
    - The growth in customer demands
    - The pressure of shortening times to market
    - The increasing globalization of markets and production
    Reliability management is typically based on the insights, views and perceptions of the real world held by the people involved in the process of decision making. These views are unique and specific to each individual involved in the management process and can be represented using soft systems methodology. Since soft systems methodology is based on insights, views and perceptions, it is especially suitable in the context of reliability prediction and management early in the product development process, as studied in this thesis, where no objective data are available (yet). Two research objectives are identified through examining market trends and applying soft systems methodology. The first research objective focuses on the identification or development of a method for reliability prediction and management that meets the following criteria:
    - It should support decision making for reliability management
    - It should be able to take non-technical factors into account
    - It has to be usable throughout the product development process, and especially in the early phases of the process
    - It should be able to capture and handle uncertainty
    This first research objective is addressed through a literature study of traditional approaches (failure mode and effects analysis, fault tree analysis and database methods) and more recent approaches to reliability prediction and reliability management (REMM, PREDICT and TRACS). The conclusion of the literature study is that traditional methods, although able to support decision making to some extent, take a technical point of view and are usable only in a limited part of the product development process. The traditional methods are capable of taking uncertainty into account, but only uncertainty about the occurrence of single faults or failure modes. The recent approaches are able to meet the criteria to a greater extent: REMM is able to provide decision support, but mainly on a technical level, by prioritizing the elimination of design concerns. The reliability estimate provided by REMM can be updated over time and is clearly usable throughout the product development process. Uncertainty is incorporated in the reliability estimate as well as in the occurrence of concerns. PREDICT provides decision support for processes as well as components, but it focuses on the technical contribution of the component or process to reliability. As in REMM, PREDICT provides an updateable estimate and incorporates uncertainty as a probability. TRACS uses Bayesian belief networks and provides decision support both in technical and non-technical terms. In the TRACS tool, estimates can be updated and uncertainty is incorporated using probabilities. Since TRACS was developed for one specific case, and an extensive discussion of the implementation process is missing, it is not readily applicable to reliability management in general.
    The discussion of the literature leads to the choice of Bayesian belief networks as an effective modelling technique for reliability prediction and management. It also indicates that Bayesian belief networks are particularly well suited to the early stages of the product development process, because of their ability to make the influences of the product development process on reliability explicit from the early stages onwards. The second research objective is the development of a clear, systematic approach to building and using Bayesian belief networks in the context of reliability prediction and management. Although Bayesian belief network construction is widely described in the literature as having three generic steps (problem structuring, instantiation and inference), how these steps are to be carried out in practice is described only summarily. No systematic, coherent and structured approach for the construction of a Bayesian belief network can be found in the literature. The second objective therefore concerns the identification and definition of model boundaries, model variables and model structure. The methodology developed to meet this second objective is an adaptation of Grounded Theory, a method widely used in the social sciences. Grounded Theory is an inductive rather than deductive method (focusing on building rather than testing theory). Grounded Theory is adapted by adopting Bayesian network idioms (Neil, Fenton & Nielson, 2000) into the approach. Furthermore, the canons of the Grounded Theory methodology (Corbin & Strauss, 1990) were not strictly followed because of their limited suitability for the subject, and for practical reasons. Grounded Theory has thus been adapted as a methodology for structuring problems modelled with Bayesian belief networks. The adapted Grounded Theory approach is applied in a case study in a business unit of a company that develops and produces medical scanning equipment. Once the Bayesian belief network model variables, structure and boundaries have been determined, the network must be instantiated. For instantiation, a probability elicitation protocol has been developed. This protocol includes training, preparation for the elicitation, a direct elicitation process, and feedback on the elicitation. The instantiation is illustrated as part of the case study. The combination of the adapted Grounded Theory method for problem structuring and the probability elicitation protocol for instantiation together forms an algorithm for Bayesian belief network construction (consisting of data gathering, problem structuring, instantiation and feedback) that consists of the following nine steps (Table 1).
    Table 1: Bayesian belief network construction algorithm
    1. Gather information regarding the way in which the topic under discussion is influenced, by conducting interviews
    2. Identify the factors (i.e. nodes) that influence the topic, by analyzing and coding the interviews
    3. Define the variables by identifying the different possible states (state-space) of the variables, through coding and direct conversation with experts
    4. Characterize the relationships between the different nodes using the idioms, through analysis and coding of the interviews
    5. Control the number of conditional probabilities that have to be elicited, using the definitional/synthesis idiom (Neil, Fenton & Nielson, 2000)
    6. Evaluate the Bayesian belief network, possibly leading to a repetition of (a number of) the first five steps
    7. Identify and define the conditional probability tables that define the relationships in the Bayesian belief network
    8. Fill in the conditional probability tables, in order to quantify the relationships in the Bayesian belief network
    9. Evaluate the Bayesian belief network, possibly leading to a repetition of (a number of) earlier steps
    A Bayesian belief network for reliability prediction and management was constructed using this algorithm. The model's problem structure and the model behaviour were validated during and at the end of the construction process. A survey was used to validate the problem structure, and the model behaviour was validated through a focus group meeting. Unfortunately, the results of the survey were limited because of the low response rate (35%). The results of the focus group meeting indicated that the model behaviour was realistic, implying that application of the adapted Grounded Theory approach results in a realistic model for reliability management. The adapted Grounded Theory approach developed in this thesis provides a scientific and practical contribution to model building and use in the face of limited availability of information. The scientific contribution lies in the provision of the systematic and coherent approach to Bayesian belief network construction described above. The practical contribution lies in the application of this approach in the context of reliability prediction and management and in the structured and algorithmic approach to model building. The case study in this thesis shows the construction and use of an effective model that enables reliability prediction and provides decision support for reliability management throughout the product development process, from the earliest stages onwards. Bayesian belief networks provide a strong basis for reliability management, giving qualitative and quantitative insights into the relationships between influential variables and reliability.
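
    To make the construction and instantiation steps above concrete, a two-node Bayesian belief network can be written directly as a prior and a conditional probability table and queried by enumeration. The variables, states and probabilities in this Python sketch are invented for illustration; the thesis's actual network is far larger:

        # Toy Bayesian belief network: DesignMaturity -> FieldReliability.
        # All variables, states and probabilities below are invented for illustration.

        # Prior over the parent node.
        p_design = {"low": 0.3, "high": 0.7}

        # Conditional probability table: P(FieldReliability | DesignMaturity).
        p_rel_given_design = {
            "low":  {"poor": 0.6, "good": 0.4},
            "high": {"poor": 0.1, "good": 0.9},
        }

        def marginal_reliability(state):
            """P(FieldReliability = state), obtained by summing out DesignMaturity."""
            return sum(p_design[d] * p_rel_given_design[d][state] for d in p_design)

        def posterior_design(observed_reliability):
            """P(DesignMaturity | FieldReliability = observed), via Bayes' rule."""
            evidence = marginal_reliability(observed_reliability)
            return {d: p_design[d] * p_rel_given_design[d][observed_reliability] / evidence
                    for d in p_design}

        print(marginal_reliability("good"))   # 0.3 * 0.4 + 0.7 * 0.9 = 0.75
        print(posterior_design("poor"))       # updated belief about design maturity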

    Improving water asset management when data are sparse

    Get PDF
    Ensuring the high performance of assets in water utilities is critically important and requires continuous improvement, due to the need to minimise the risk of harm to human health and the environment from contaminated drinking water. Continuous improvement and innovation in water asset management are therefore necessary and are driven by (i) increased regulatory requirements on serviceability; (ii) high maintenance costs; (iii) higher customer expectations; and (iv) enhanced environmental and health/safety requirements. High-quality data on asset failures, maintenance and operations are key requirements for developing reliability models. However, a literature search revealed that, in practice, data are sometimes limited in water utilities, particularly for over-ground assets. Perhaps surprisingly, there is often a mismatch between the ambitions of sophisticated reliability tools and the availability of the asset data water utilities are able to draw upon to implement them in practice. This research provides models to support decision-making in water utility asset management when data are limited. Three approaches were developed: for assessing asset condition, for assessing maintenance effectiveness, and for selecting maintenance regimes for specific asset groups. Expert elicitation was used to test and apply the developed decision-support tools, and a major regional water utility in England was used as a case study to investigate and test the developed approaches. The new approach achieved improved precision in asset condition assessment (Figure 3-3a), supporting the requirements of the UK Capital Maintenance Planning Common Framework. Critically, the thesis demonstrated that assets were sometimes misallocated by more than 50% between condition grades when using current approaches. Expert opinions were also sought for assessing maintenance effectiveness, and a new approach was tested with over-ground assets. The new approach's value was demonstrated by its capability to account for finer measurements (as low as 10%) of maintenance effectiveness (Table 4-4). An asset maintenance regime selection approach was developed to support decision-making when data are sparse. The value of the approach is its versatility in selecting different regimes for different asset groups, specifically accounting for the assets' unique performance variables.

    Dynamics of deception between strangers

    Get PDF