
    Utilitarian Collective Choice and Voting

    In his seminal Social Choice and Individual Values, Kenneth Arrow stated that his theory applies to voting. Many voting theorists have been convinced that, on account of Arrow’s theorem, all voting methods must be seriously flawed. Arrow’s theory is strictly ordinal, the cardinal aggregation of preferences being explicitly rejected. In this paper I point out that all voting methods are cardinal and therefore outside the reach of Arrow’s result. Parallel to Arrow’s ordinal approach, there evolved a consistent cardinal theory of collective choice. This theory, most prominently associated with the work of Harsanyi, continued the older utilitarian tradition in a more formal style. The purpose of this paper is to show that various derivations of utilitarian social welfare functions (SWFs) can also be used to derive utilitarian voting (UV). By this I mean a voting rule that allows the voter to score each alternative in accordance with a given scale. UV-k indicates a scale with k distinct values. The general theory leaves k to be determined on pragmatic grounds. A (1,0) scale gives approval voting. I prefer the scale (1,0,-1) and refer to the resulting voting rule as evaluative voting. A conclusion of the paper is that the defects of conventional voting methods result not from Arrow’s theorem, but rather from restrictions imposed on voters’ expression of their preferences. The analysis is extended to strategic voting, utilizing a novel set of assumptions regarding voter behavior.
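
    A minimal sketch of how UV-k tallying works, assuming the (1, 0, -1) evaluative-voting scale described above: each voter scores every alternative on the scale and the alternative with the highest total wins. The ballot representation, candidate names, and the helper `uv_winner` are illustrative assumptions, not taken from the paper.

```python
# UV-k tallying sketch; replacing the scale with {1, 0} gives approval voting.
EVALUATIVE_SCALE = {1, 0, -1}

def uv_winner(ballots, scale=EVALUATIVE_SCALE):
    """Each ballot maps every alternative to a score on the given scale;
    the alternative with the highest total score wins."""
    totals = {}
    for ballot in ballots:
        for alternative, score in ballot.items():
            if score not in scale:
                raise ValueError(f"score {score} not on scale {sorted(scale)}")
            totals[alternative] = totals.get(alternative, 0) + score
    return max(totals, key=totals.get), totals

# Example: three voters scoring alternatives A, B, C on the (1, 0, -1) scale.
ballots = [
    {"A": 1, "B": 0, "C": -1},
    {"A": 1, "B": -1, "C": 0},
    {"A": -1, "B": 1, "C": 1},
]
print(uv_winner(ballots))  # ('A', {'A': 1, 'B': 0, 'C': 0})
```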

    Measurement in Economics and Social Science

    The paper discusses measurement, primarily in economics, from both analytical and historical perspectives. The historical section traces the commitment to ordinalism on the part of economic theorists from the doctrinal disputes between classical economics and marginalism, through the struggle of orthodox economics against socialism, down to the cold-war alliance between mathematical social science and anti-communist ideology. In economics the commitment to ordinalism led to the separation of theory from the quantitative measures that are computed in practice: price and quantity indexes, consumer surplus and real national product. The commitment to ordinality entered political science, via Arrow’s ‘impossibility theorem’, effectively merging it with economics and ensuring its sterility. How can a field that has as its central result the impossibility of democracy contribute to the design of democratic institutions? The analytical part of the paper deals with the quantitative measures mentioned above. I begin with the conceptual clarification that what these measures try to achieve is a restoration of the money metric that is lost when prices are variable. I conclude that there is only one measure that can be embedded in a satisfactory economic theory, free from unreasonable restrictions. It is the Törnqvist index as an approximation to its theoretical counterpart, the Divisia index. The statistical agencies have at various times produced different measures for real national product and its components, as well as related concepts. I argue that all of these are flawed and that a single deflator should be used for the aggregate and the components. Ideally this should be a chained Törnqvist price index defined on aggregate consumption. The social sciences are split. The economic approach is abstract, focused on the assumption of rational and informed behavior, and tends to the political right. The sociological approach is empirical, stresses the non-rational aspects of human behavior and tends to the political left. I argue that the split is due to the fact that the empirical and theoretical traditions were never joined in the social sciences as they were in the natural sciences. I also argue that measurement can potentially help in healing this split.
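
    A brief sketch of the standard bilateral Törnqvist price index and its chaining over consecutive periods, as the abstract recommends. The formula used here, log P(0,1) = Σ_i ½(s_i0 + s_i1) · log(p_i1 / p_i0) with expenditure shares s_it, is the textbook definition; the function names and the way price and quantity data are passed in are assumptions for illustration, not the paper’s own construction.

```python
import math

def tornqvist_index(p0, p1, q0, q1):
    """Bilateral Törnqvist price index between period 0 and period 1.
    p0, p1 are prices and q0, q1 quantities, keyed by good."""
    exp0 = {i: p0[i] * q0[i] for i in p0}   # expenditures in period 0
    exp1 = {i: p1[i] * q1[i] for i in p1}   # expenditures in period 1
    tot0, tot1 = sum(exp0.values()), sum(exp1.values())
    log_index = sum(
        0.5 * (exp0[i] / tot0 + exp1[i] / tot1) * math.log(p1[i] / p0[i])
        for i in p0
    )
    return math.exp(log_index)

def chained_tornqvist(prices, quantities):
    """Chain bilateral indexes across consecutive periods (base period = 1.0)."""
    index = [1.0]
    for t in range(1, len(prices)):
        index.append(index[-1] * tornqvist_index(
            prices[t - 1], prices[t], quantities[t - 1], quantities[t]))
    return index

# Toy example with two goods over three periods.
prices = [{"a": 1.0, "b": 2.0}, {"a": 1.1, "b": 2.1}, {"a": 1.2, "b": 2.3}]
quantities = [{"a": 10, "b": 5}, {"a": 9, "b": 6}, {"a": 9, "b": 6}]
print(chained_tornqvist(prices, quantities))
```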

    Science and Ideology in Economic, Political, and Social Thought

    This paper has two sources: One is my own research in three broad areas: business cycles, economic measurement and social choice. In all of these fields I attempted to apply the basic precepts of the scientific method as it is understood in the natural sciences. I found that my effort at using natural science methods in economics was met with little understanding and often considerable hostility. I found economics to be driven less by common sense and empirical evidence than by various ideologies that exhibited either a political or a methodological bias, or both. This brings me to the second source: Several books have appeared recently that describe in historical terms the ideological forces that have shaped either the direct areas in which I worked, or a broader background. These books taught me that the ideological forces in the social sciences are even stronger than I imagined on the basis of my own experiences. The scientific method is the antipode to ideology. I feel that the scientific work that I have done on specific, long-standing and fundamental problems in economics and political science has given me additional insights into the destructive role of ideology beyond the history-of-thought orientation of the works I will be discussing.

    Preselection of robust radiomic features does not improve outcome modelling in non-small cell lung cancer based on clinical routine FDG-PET imaging.

    Radiomics is a promising tool for identifying imaging-based biomarkers. Radiomics-based models are often trained on single-institution datasets; however, multi-centre imaging datasets are preferred for external generalizability owing to the influence of inter-institutional scanning differences and acquisition settings. The study aim was to determine the value of preselection of robust radiomic features in routine clinical positron emission tomography (PET) images to predict clinical outcomes in locally advanced non-small cell lung cancer (NSCLC). A total of 1404 primary tumour radiomic features were extracted from pre-treatment [18F]fluorodeoxyglucose (FDG)-PET scans of stage IIIA/N2 or IIIB NSCLC patients using a training cohort (n = 79; prospective Swiss multi-centre randomized phase III trial SAKK 16/00; 16 centres) and an internal validation cohort (n = 31; single centre). Robustness studies investigating delineation variation, attenuation correction and motion were performed (intraclass correlation coefficient threshold > 0.9). Two 12-/24-month event-free survival (EFS) and overall survival (OS) logistic regression models were trained using standardized imaging: (1) with robust features alone and (2) with all available features. Models were then validated using fivefold cross-validation and on a separate single-centre dataset. Model performance was assessed using area under the receiver operating characteristic curve (AUC). Robustness studies identified 179 stable features (13%), with 25% stable features for 3D versus 4D acquisition, 31% for attenuation correction and 78% for delineation. Univariable analysis found no significant robust features predicting 12-/24-month EFS and 12-month OS (p value > 0.076). Prognostic models without robust preselection performed well for 12-month EFS in training (AUC = 0.73) and validation (AUC = 0.74). Patient stratification into two risk groups based on 12-month EFS was significant for training (p value = 0.02) and validation cohorts (p value = 0.03). A PET-based radiomics model using a standardized, multi-centre dataset to predict EFS in locally advanced NSCLC was successfully established and validated with good performance. Prediction models with robust feature preselection were unsuccessful, indicating the need for a standardized imaging protocol.
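
    A hedged sketch of the two modelling routes compared above: (1) keep only features whose intraclass correlation coefficient (ICC) exceeds 0.9 across the robustness studies, or (2) use all features, then fit a logistic regression for the 12-month endpoint and score it with five-fold cross-validated AUC. The arrays `X`, `y` and the precomputed `icc` values are random placeholders standing in for the study data, and scikit-learn is assumed as the modelling toolkit; the published analysis pipeline may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(79, 1404))      # training cohort: 79 patients, 1404 radiomic features
y = rng.integers(0, 2, size=79)      # 12-month event-free survival indicator (placeholder)
icc = rng.uniform(0, 1, size=1404)   # ICC per feature from the robustness studies (placeholder)

robust = icc > 0.9                   # robust-feature preselection threshold from the abstract

for label, mask in [("robust features", robust), ("all features", np.ones(1404, dtype=bool))]:
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, X[:, mask], y, cv=5, scoring="roc_auc")
    print(f"{label}: mean five-fold CV AUC = {auc.mean():.2f}")
```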