
    Comparing the performance of baseball players : a multiple output approach

    This article extends ideas from the economics literature on multiple output production and efficiency to develop methods for comparing baseball players that take into account the many dimensions of batting performance. A key part of this approach is the output aggregator. The weights in this output aggregator can be selected a priori (as is done with batting or slugging averages) or can be estimated statistically based on the performance of the best players in baseball. Once the output aggregator is obtained, an individual player can then be measured relative to the best, and a number between 0 and 1 characterizes his performance as a fraction of the best. The methods are applied to hitters using data from 1995-1999 on all regular players in baseball's major leagues.
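The aggregation idea described above can be sketched as follows; the weights and season totals below are illustrative assumptions (slugging-average-style weights), not the paper's estimated values:

```python
def output_aggregator(outputs, weights):
    """Aggregate several batting outputs (e.g. singles, doubles,
    triples, home runs) into a single number via a weighted sum."""
    return sum(w * y for w, y in zip(weights, outputs))

def relative_performance(player_outputs, best_outputs, weights):
    """Score a player as a fraction of the best player's aggregate
    output, yielding a number between 0 and 1."""
    return (output_aggregator(player_outputs, weights)
            / output_aggregator(best_outputs, weights))

# A priori weights in the style of the slugging average: 1, 2, 3, 4
# for singles, doubles, triples and home runs (illustrative only).
weights = [1, 2, 3, 4]
player = [100, 30, 2, 20]   # hypothetical season totals
best   = [120, 40, 5, 45]   # hypothetical best-player totals
score = relative_performance(player, best, weights)
```

The statistical alternative mentioned in the abstract would estimate `weights` from data on the best players rather than fixing them in advance.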

    Bayesian approaches to cointegration

    The purpose of this paper is to survey and critically assess the Bayesian cointegration literature. In one sense, Bayesian analysis of cointegration is straightforward. The researcher can combine the likelihood function with a prior and do Bayesian inference with the resulting posterior. However, interesting and empirically important issues of global and local identification (and, as a result, prior elicitation) arise from the fact that the matrix of long run parameters is potentially of reduced rank. As we shall see, these identification problems can cause serious problems for Bayesian inference. For instance, a common noninformative prior can lead to a posterior distribution which is improper (i.e. is not a valid p.d.f. since it does not integrate to one), thus precluding valid statistical inference. This issue was brought forward by Kleibergen and Van Dijk (1994, 1998). The development of the Bayesian cointegration literature reflects an increasing awareness of these issues, and this paper is organized to reflect this development. In particular, we begin by discussing early work, based on VAR or Vector Moving Average (VMA) representations, which ignored these issues. We then proceed to a discussion of work based on the ECM representation, beginning with a simple specification using the linear normalization and normal priors before moving on to the recent literature which develops methods for sensible treatment of the identification issues.
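The identification problem referred to above can be seen in the standard error correction model (ECM) representation, where the long-run matrix has reduced rank:

```latex
\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \varepsilon_t,
\qquad \Pi = \alpha \beta',
```

where $y_t$ is $n \times 1$ and, with $r$ cointegrating relationships, $\alpha$ and $\beta$ are $n \times r$. Since $\alpha\beta' = (\alpha H)\,(\beta H'^{-1})'$ for any nonsingular $r \times r$ matrix $H$, the factors $\alpha$ and $\beta$ are not separately identified without a restriction such as the linear normalization $\beta = (I_r, B')'$ mentioned in the abstract.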

    Managerial decision making under uncertainty: the case of Twenty20 cricket

    We consider managerial decision making by examining the impact of decisions taken by cricket captains on Twenty20 International (T20I) match outcomes. In particular, we examine whether pressure from external commentators is associated with suboptimal decision making by captains. Using data from over 300 T20I matches, we find little evidence that either winning the toss or choosing to bat first improves the likelihood of winning. Despite this, we find that captains in T20I cricket are significantly more likely to choose to bat rather than bowl after winning the toss, a finding that is consistent with social pressure constraining captains’ decision making.
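The comparison in the abstract amounts to contrasting win rates across toss outcomes; a minimal sketch, with entirely hypothetical match counts standing in for the paper's data:

```python
def win_rate(wins, matches):
    """Fraction of matches won by the group in question."""
    return wins / matches

# Hypothetical counts: outcomes for toss winners vs. toss losers
# across 600 T20I team-matches (illustrative numbers only).
toss_winner_wins, toss_winner_matches = 155, 300
toss_loser_wins, toss_loser_matches = 145, 300

# A difference near zero is consistent with the finding that
# winning the toss confers little advantage.
advantage = (win_rate(toss_winner_wins, toss_winner_matches)
             - win_rate(toss_loser_wins, toss_loser_matches))
```

The paper's actual analysis would condition on match covariates; this sketch only illustrates the raw comparison the abstract describes.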

    The Role of Bile in the Regulation of Exocrine Pancreatic Secretion

    As early as 1926, Mellanby (1) was able to show that introduction of bile into the duodenum of anesthetized cats produces a copious flow of pancreatic juice. In conscious dogs, however, Ivy & Lueth (2) reported bile to be only a weak stimulant of pancreatic secretion. Diversion of bile from the duodenum, moreover, did not influence pancreatic volume secretion stimulated by a meal (3,4). Finally, Thomas & Crider (5) observed that bile not only failed to stimulate the secretion of pancreatic juice but also abolished the pancreatic response to intraduodenally administered peptone or soap.

    Re-examining the consumption-wealth relationship : the role of model uncertainty

    This paper discusses the consumption-wealth relationship. Following the recent influential work of Lettau and Ludvigson [e.g. Lettau and Ludvigson (2001), (2004)], we use data on consumption, assets and labor income and a vector error correction framework. Key findings of their work are that consumption does respond to permanent changes in wealth in the expected manner, but that most changes in wealth are transitory and have no effect on consumption. We investigate the robustness of these results to model uncertainty and argue for the use of Bayesian model averaging. We find that there is model uncertainty with regards to the number of cointegrating vectors, the form of deterministic components, lag length and whether the cointegrating residuals affect consumption and income directly. Whether this uncertainty has important empirical implications depends on the researcher's attitude towards the economic theory used by Lettau and Ludvigson. If we work with their model, our findings are very similar to theirs. However, if we work with a broader set of models and let the data speak, we obtain somewhat different results. In the latter case, we find that the exact magnitude of the role of permanent shocks is hard to estimate precisely. Thus, although some support exists for the view that their role is small, we cannot rule out the possibility that they have a substantive role to play.
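Bayesian model averaging combines results across the model space by weighting each model by its posterior probability. A minimal sketch, assuming equal prior model probabilities and hypothetical marginal likelihoods (the models and numbers below are illustrative, not the paper's):

```python
import math

def bma_weights(log_marginal_likelihoods):
    """Posterior model probabilities under equal prior model
    probabilities: exponentiate and normalize, subtracting the
    maximum first for numerical stability."""
    m = max(log_marginal_likelihoods)
    raw = [math.exp(l - m) for l in log_marginal_likelihoods]
    total = sum(raw)
    return [r / total for r in raw]

def bma_estimate(estimates, log_marginal_likelihoods):
    """Model-averaged estimate of a quantity of interest, e.g. the
    share of wealth variation attributed to permanent shocks."""
    weights = bma_weights(log_marginal_likelihoods)
    return sum(w * e for w, e in zip(weights, estimates))

# Hypothetical: three models differing in cointegrating rank and
# lag length, each producing its own estimate of the share.
log_ml = [-104.2, -103.1, -106.5]
estimates = [0.12, 0.05, 0.30]
averaged = bma_estimate(estimates, log_ml)
```

Averaging in this way lets the data's support for each specification, rather than a single a priori model choice, determine the reported estimate.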

    Comparing crisp and fuzzy approaches to solving information security problems

    A comparison of crisp and fuzzy approaches is carried out with the aim of identifying their similarities and differences. In the problem of allocating information-protection resources, the principles by which membership functions of fuzzy sets are formed, and their effect on the final results, are analyzed. It is shown that the fuzzy approach makes it possible to optimize the performance of an information-security system through a rational choice of membership functions that reflect the basic characteristic of the objects: their dynamic vulnerability. Using the example of a system of two objects with different vulnerabilities, the conditions under which the results of the two approaches agree most closely are established. The technique can be used to calculate acceptable costs in information systems with an arbitrary number of objects that differ in the volume of stored information, vulnerability and level of acceptable losses. Ways of further applying the method to information security problems are identified.
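A triangular membership function is one common way to encode an object's vulnerability as a degree of membership in a fuzzy set; the scale and breakpoints below are illustrative assumptions, not values from the paper:

```python
def triangular_membership(x, a, b, c):
    """Degree of membership in [0, 1] for a triangular fuzzy set
    rising from zero at a to a peak of 1 at b, then falling back
    to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy set "highly vulnerable object" on a 0-10
# vulnerability scale, peaking at 7.
membership = triangular_membership(6.0, a=4.0, b=7.0, c=10.0)
```

The abstract's point is that the choice of such functions (their shape and breakpoints) is what lets the fuzzy approach tune the protection system's performance.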