    Unambiguous events and dynamic Choquet preferences.

    This paper explores the relationship between dynamic consistency and existing notions of unambiguous events for Choquet expected utility preferences. A decision maker is faced with an information structure represented by a filtration. We show that the decision maker’s preferences respect dynamic consistency on a fixed filtration if and only if the last stage of the filtration is composed of unambiguous events in the sense of Nehring (Math Social Sci 38:197–213, 1999). Adopting two axioms, conditional certainty equivalence consistency and constrained dynamic consistency for filtration-measurable acts, it is shown that the decision maker respects these two axioms on a fixed filtration if and only if the last stage of the filtration is made up of unambiguous events in the sense of Zhang (Econ Theory 20:159–181, 2002).
    Keywords: Choquet expected utility; unambiguous events; filtration; updating; dynamic consistency; consequentialism.
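
    For orientation, the functional at stake can be written in its standard form (a textbook formulation; the notation here is illustrative and not taken from the paper). For an act f on a finite state space S, a utility u, and a capacity \nu with \nu(\emptyset) = 0 and \nu(S) = 1, the Choquet expected utility is

        V(f) = \int u(f)\, d\nu = \sum_{i=1}^{n} u\big(f(s_{(i)})\big) \Big[ \nu\big(\{s_{(1)}, \ldots, s_{(i)}\}\big) - \nu\big(\{s_{(1)}, \ldots, s_{(i-1)}\}\big) \Big],

    where the states are ordered so that u(f(s_{(1)})) \ge \cdots \ge u(f(s_{(n)})). When \nu is additive this reduces to ordinary expected utility; it is the non-additive case that makes the notion of an unambiguous event non-trivial.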

    Predicting the Impact of Measures Against P2P Networks on the Transient Behaviors

    The paper has two objectives. The first is to study rigorously the transient behavior of some peer-to-peer (P2P) networks in which information is replicated and disseminated according to epidemic-like dynamics. The second is to use the insight gained from this analysis to predict how effective measures taken against P2P networks are. We first introduce a stochastic model, which extends a classical epidemic model, and characterize the P2P swarm behavior in the presence of free-riding peers. We then study a second model in which a peer initiates a contact with another peer chosen randomly. In both cases the network is shown to exhibit a phase transition: a small change in the parameters causes a large change in the behavior of the network. We show, in particular, how the phase transition affects measures that content provider networks may take against P2P networks that distribute unauthorized music or books, and how efficient such counter-measures are.
    Comment: IEEE Infocom (2011).
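
    To illustrate the kind of threshold behavior the abstract refers to, here is a minimal sketch in the spirit of a classical mean-field epidemic model; it is not the paper's stochastic P2P model, and the rates lam (contact/replication) and mu (seeder departure) and all numbers are assumptions for the sketch. The swarm fades out or takes off depending on whether lam/mu crosses 1:

        def simulate_swarm(lam, mu, seeders0=0.01, steps=2000, dt=0.01):
            """Mean-field SIR-style sketch: leechers acquire the content through
            contacts with seeders at rate lam; seeders leave the swarm at rate mu."""
            s, i = 1.0 - seeders0, seeders0   # fractions of leechers and seeders
            peak = i
            for _ in range(steps):
                new_copies = lam * s * i * dt   # contacts that replicate the content
                departures = mu * i * dt        # seeders leaving the swarm
                s -= new_copies
                i += new_copies - departures
                peak = max(peak, i)
            return peak

        # Below the threshold lam/mu < 1 the swarm fades out; above it, it takes off.
        for lam in (0.5, 2.0):
            print(f"lam/mu = {lam:.1f}: peak seeder fraction = {simulate_swarm(lam, 1.0):.3f}")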

    BLIMPK Simulations of Hypervelocity Boundary Layers - Boundary Layer Integral Matrix Procedure with Kinetics

    When designing reentry spacecraft, whether for Earth reentry or for other planetary atmospheres, it is important to obtain an accurate measure of surface heating rates throughout the pertinent reentry trajectory. Heat transfer, on internal and external surfaces, largely dictates the shielding method and material to be used. For this reason, simulating the flowfield around the vehicle is quite important, in particular when it comes to simulating the viscous flowfield at the boundary layer level, close to the surface. At reentry speeds on the order of several kilometers per second, the flow is known to dissociate inside the hot boundary layer, and gas-phase as well as surface reactions, including ablation, must be accounted for. During the design of the space shuttle, for example, in order to select the best Thermal Protection System (TPS) possible, studies had to be carried out to look at reaction rate and surface catalycity effects on the heat transfer rate. Furthermore, this had to be done quickly for several possible vehicle configurations. The need for a fast but flexible boundary layer code therefore led to the development of BLIMP, a Boundary Layer Integral Matrix Procedure satisfying these requirements. At the time, fast meant that a solution had to be obtained with a minimal number of grid points. Flexible meant it had to be easy to try different chemical models, i.e. different species compositions, ablation models and boundary layer reactions. Continuously updated since then, the latest version of BLIMP was renamed BLIMPK after the addition of kinetics as an option. The code is therefore capable of simulating multicomponent boundary layers with frozen, equilibrium or nonequilibrium chemistry. Unequal concentration and thermal diffusion are other options, and laminar as well as turbulent flows (with built-in eddy viscosity models) can be computed. Gas-phase reactions and surface reactions are parameters, and a maximum of 15 transverse nodal points is all that is needed to capture the profile of the boundary layer; the number of possible streamwise nodes is unlimited. A detailed description of the main program and the numerical scheme is available in Bartlett and Kendall [1967] for the first version of BLIMP, which included solely equilibrium chemistry. The nonequilibrium chemistry extension, including surface reactions, is discussed in Tong et al. [1973], and the various turbulence models are compared in Evans [1975]. A brief input guide for the latest version of BLIMPK, called BLIMP88, can be found in Murray [1988]. The present report only attempts to summarize the important ideas contained in the original reports relevant to understanding and running the code. Examples of T5-generated flows computed with BLIMP88 are also included. In particular, different simulations of an axisymmetric hypervelocity boundary layer on a sharp cone are considered.

    A GPU-based multi-criteria optimization algorithm for HDR brachytherapy

    Currently in HDR brachytherapy planning, a manual fine-tuning of an objective function is necessary to obtain case-specific valid plans. This study intends to facilitate this process by proposing a patient-specific inverse planning algorithm for HDR prostate brachytherapy: GPU-based multi-criteria optimization (gMCO). Two GPU-based optimization engines, simulated annealing (gSA) and a quasi-Newton optimizer (gL-BFGS), were implemented to compute multiple plans in parallel. After evaluating the equivalence and the computational performance of these two optimization engines, one preferred engine was selected for the gMCO algorithm. Five hundred sixty-two previously treated prostate HDR cases were divided into a validation set (100) and a test set (462). In the validation set, the number of Pareto optimal plans needed to achieve the best plan quality was determined for the gMCO algorithm. In the test set, gMCO plans were compared with the physician-approved clinical plans. Over the 462 cases, the number of clinically valid plans was 428 (92.6%) for clinical plans and 461 (99.8%) for gMCO plans. The number of valid plans with target V100 coverage greater than 95% was 288 (62.3%) for clinical plans and 414 (89.6%) for gMCO plans. The mean planning time was 9.4 s for the gMCO algorithm to generate 1000 Pareto optimal plans. In conclusion, gL-BFGS is able to compute thousands of SA-equivalent treatment plans within a short time frame. Powered by gL-BFGS, an ultra-fast and robust multi-criteria optimization algorithm was implemented for HDR prostate brachytherapy. A large-scale comparison against physician-approved clinical plans showed that treatment plan quality could be improved and planning time could be significantly reduced with the proposed gMCO algorithm.
    Comment: 18 pages, 7 figures.
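
    The abstract does not spell out the dosimetric objectives or the GPU kernels, so the following is only a minimal CPU sketch of the multi-criteria idea it describes: generate many candidate plans under randomly sampled objective weights, then keep the non-dominated (Pareto-optimal) ones. The two toy objectives and all names are assumptions standing in for criteria such as target coverage and organ-at-risk dose:

        import numpy as np

        rng = np.random.default_rng(0)

        def optimize_plan(weights):
            """Toy stand-in for one weighted optimization run: returns the pair of
            penalties (coverage, organ-at-risk) the plan achieves. A real engine
            would run gL-BFGS or gSA over dwell times here."""
            w = weights / weights.sum()
            coverage_penalty = 1.0 / (w[0] + 0.1) + 0.05 * rng.normal()
            oar_penalty = 1.0 / (w[1] + 0.1) + 0.05 * rng.normal()
            return np.array([coverage_penalty, oar_penalty])

        def pareto_indices(points):
            """Indices of non-dominated points (lower is better on every axis)."""
            keep = []
            for i, p in enumerate(points):
                dominated = any((q <= p).all() and (q < p).any()
                                for j, q in enumerate(points) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        # Many candidate plans with random objective weightings, then Pareto filtering.
        plans = np.array([optimize_plan(w) for w in rng.random((1000, 2)) + 0.01])
        front = pareto_indices(plans)
        print(f"{len(front)} Pareto-optimal plans out of {len(plans)} candidates")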

    Predicting the relevance of distributional semantic similarity with contextual information

    Using distributional analysis methods to compute semantic proximity links between words has become commonplace in NLP. The resulting relations are often noisy or difficult to interpret in general. This paper focuses on the issues of evaluating a distributional resource and filtering the relations it contains, but instead of considering it in abstracto, we focus on pairs of words in context. In a discourse, we are interested in knowing whether the semantic link between two items is a by-product of textual coherence or is irrelevant. We first set up a human annotation of semantic links with or without contextual information to show the importance of the textual context in evaluating the relevance of semantic similarity, and to assess the prevalence of actual semantic relations between word tokens. We then built an experiment to automatically predict this relevance, evaluated on the reliable reference data set produced by the first annotation. We show that in-document information greatly improves the prediction made by the similarity level alone.
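
    As a rough illustration of the prediction step (the paper's actual features and classifier are not given in the abstract; the feature set and the logistic-regression choice below are assumptions for the sketch), the distributional similarity score can be combined with simple in-document cues and a classifier trained on the annotated pairs:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def pair_features(similarity, same_sentence, token_distance, pair_doc_freq):
            """One (word1, word2) pair in context: the distributional similarity
            score plus illustrative in-document cues."""
            return [similarity, float(same_sentence),
                    1.0 / (1.0 + token_distance), pair_doc_freq]

        # Toy annotated pairs: 1 = link judged relevant in context, 0 = not.
        X = np.array([
            pair_features(0.71, True, 4, 0.9),
            pair_features(0.65, False, 120, 0.2),
            pair_features(0.40, True, 2, 0.8),
            pair_features(0.55, False, 300, 0.1),
        ])
        y = np.array([1, 0, 1, 0])

        clf = LogisticRegression().fit(X, y)
        print(clf.predict_proba([pair_features(0.60, True, 10, 0.7)])[:, 1])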

    Evaluating and improving a distributional resource: an annotation protocol for semantic links in context

    The application of distributional analysis methods to compute semantic proximity links between words has become common practice in NLP. However, much remains to be done to better understand the nature of the semantic proximity computed by these methods. This article addresses the question of evaluating a distributional resource and of improving it; indeed, we see the setting up of an evaluation procedure as a first step towards characterizing the resource and towards adjusting it, that is, reducing noise in favor of pairs of distributional neighbors exhibiting a relevant semantic relation. We propose a protocol for annotating distributional neighbors in context, which allows us to build a reliable set of reference data (pairs of neighbors judged relevant or not by the annotators). The resulting data are analyzed and then used to train a system for automatic categorization of distributional neighbor links, which takes a wide range of cues into account and allows effective filtering of the resource under consideration.
    • 

    corecore