538 research outputs found

    Marx's Economy and Beyond

    Following renewed interest in Marx's political economy in the wake of the financial crisis, the paper aims to contribute to debates re-evaluating Marx's theory of capitalism and his conceptualisation of the relationship between the polity and economy. We bring together our approaches from political philosophy, economic sociology, and political economy, making the case for a renewal of historical materialism. We develop a perspective of a radically politicised economy, rather than treating polity and economy as separate spheres. The political implications of this perspective involve an insistence on political and economic democratisation and human rights as an integral part of any socially egalitarian alternatives to capitalist political economies. The paper, written without the full academic paraphernalia, engages with current defences of the classical Marxist understanding of exploitation, especially the labour theory of value, and presents an alternative, neo-Polanyian analysis.

    Isoelastic Agents and Wealth Updates in Machine Learning Markets

    Recently, prediction markets have shown considerable promise for developing flexible mechanisms for machine learning. In this paper, agents with isoelastic utilities are considered. It is shown that the costs associated with homogeneous markets of agents with isoelastic utilities produce equilibrium prices corresponding to alpha-mixtures, with a particular form of mixing component relating to each agent's wealth. We also demonstrate that wealth accumulation for logarithmic and other isoelastic agents (through payoffs on prediction of training targets) can implement both Bayesian model updates and mixture weight updates by imposing different market payoff structures. An iterative algorithm is given for market equilibrium computation. We demonstrate that inhomogeneous markets of agents with isoelastic utilities outperform state-of-the-art aggregate classifiers such as random forests, as well as single classifiers (neural networks, decision trees) on a number of machine learning benchmarks, and show that isoelastic combination methods are generally better than their logarithmic counterparts. (Comment: Appears in Proceedings of the 29th International Conference on Machine Learning, ICML 2012.)
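A minimal sketch of the logarithmic special case mentioned in the abstract, for a single binary outcome (all function and variable names here are illustrative, not taken from the paper): for a homogeneous market of log-utility agents, the equilibrium price is a wealth-weighted mixture of the agents' beliefs, and the payoff rule rescales wealths exactly like a Bayesian posterior update over models.

```python
import numpy as np

def log_market_price(wealth, beliefs):
    # Equilibrium price for a homogeneous market of logarithmic-utility
    # agents: a wealth-weighted mixture of the agents' beliefs.
    w = np.asarray(wealth, float)
    return float((w / w.sum()) @ np.asarray(beliefs, float))

def wealth_after_outcome(wealth, beliefs, outcome):
    # Payoffs multiply each agent's wealth by belief/price on the realised
    # outcome, which is exactly a Bayesian posterior update over "models".
    w = np.asarray(wealth, float)
    b = np.asarray(beliefs, float)
    p = log_market_price(w, b)
    like = b if outcome == 1 else 1.0 - b
    price = p if outcome == 1 else 1.0 - p
    return w * like / price

wealth = np.array([0.5, 0.5])    # prior weights over two agents/models
beliefs = np.array([0.8, 0.4])   # each agent's P(y = 1)
price = log_market_price(wealth, beliefs)             # 0.6
posterior = wealth_after_outcome(wealth, beliefs, 1)  # [2/3, 1/3]
```

After the outcome y = 1 is observed, each wealth is scaled by its belief relative to the market price, so better-calibrated agents accumulate wealth over repeated training targets.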

    Kantian Dignity and Marxian Socialism

    This paper offers an account of human dignity based on a discussion of Kant's moral and political philosophy and then shows its relevance for articulating and developing in a fresh way some normative dimensions of Marx's critique of capitalism as involving exploitation, domination, and alienation, and the view of socialism as involving a combination of freedom and solidarity. What is advanced here is not Kant's own conception of dignity, but an account that partly builds on that conception and partly criticizes it. The same is the case with the account of socialism in relation to Marx's work. As articulated, Kantian dignity and Marxian socialism turn out to be quite appealing and mutually supportive.

    MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images

    Multi-modal fusion approaches aim to integrate information from different data sources. Unlike natural datasets, such as in audio-visual applications, where samples consist of "paired" modalities, data in healthcare is often collected asynchronously. Hence, requiring the presence of all modalities for a given sample is not realistic for clinical tasks and significantly limits the size of the dataset during training. In this paper, we propose MedFuse, a conceptually simple yet promising LSTM-based fusion module that can accommodate uni-modal as well as multi-modal input. We evaluate the fusion method and introduce new benchmark results for in-hospital mortality prediction and phenotype classification, using clinical time-series data in the MIMIC-IV dataset and corresponding chest X-ray images in MIMIC-CXR. Compared to more complex multi-modal fusion strategies, MedFuse provides a performance improvement by a large margin on the fully paired test set. It also remains robust across the partially paired test set containing samples with missing chest X-ray images. We release our code for reproducibility and to enable the evaluation of competing models in the future.
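A toy sketch of the core idea, assuming per-modality encoders that emit fixed-size embeddings (the sizes, parameter initialisation, and stand-in embeddings below are illustrative assumptions, not the released MedFuse model): the modality embeddings are fed to an LSTM as a variable-length token sequence, so a sample with a missing chest X-ray simply becomes a shorter sequence rather than being dropped.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 8            # per-modality embedding size / LSTM hidden size (assumed)
Wx = rng.normal(size=(4 * H, D)) * 0.1   # input-to-gate weights (toy init)
Wh = rng.normal(size=(4 * H, H)) * 0.1   # hidden-to-gate weights (toy init)
b = np.zeros(4 * H)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_fuse(tokens):
    # Run a single-layer LSTM over the modality tokens; the final hidden
    # state is the fused representation fed to the task head.
    h, c = np.zeros(H), np.zeros(H)
    for x in tokens:
        z = Wx @ x + Wh @ h + b
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

ehr = rng.normal(size=D)   # stand-in clinical time-series embedding
cxr = rng.normal(size=D)   # stand-in chest X-ray embedding

paired = lstm_fuse([ehr, cxr])   # fully paired sample
ehr_only = lstm_fuse([ehr])      # chest X-ray missing: shorter sequence
```

Both calls return a fused vector of the same size, which is what lets one module serve uni-modal and multi-modal samples alike.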

    The Role of State Political Parties in Congressional Elections

    Using an original dataset of state party bylaws, this dissertation examines the institutional role of state political parties in congressional primaries. Specifically, I consider how representative state political parties are of the general public, whether the varying levels of representation found in each state party influence who runs for Congress, and whether state political parties are able to influence levels of electoral competition through provisions of their bylaws. Overall, I find both state Democratic and Republican parties vary in the extent to which they prioritize gender representation and youth representation in their state central committees through their party rules. However, these rules only seem to influence the candidate emergence process during Democratic primaries. Specifically, in 2018, Democratic women were more likely to run for the House of Representatives when representing a state party chaired by a woman and when representing a state party which granted party committee membership to an allied women's group. Similarly, state Democratic parties were more likely to nominate younger candidates for the House of Representatives as the number of youth party members in their state central committee increased. Beyond candidate emergence, I find state party rules also influenced levels of electoral competition during the 2018 congressional primary elections, albeit differently for each party. State Democratic parties were less likely to see divisive primaries when they avoided policies that required the party to remain neutral during contested primaries. In comparison, state Republican parties were less likely to see divisive primaries, and also saw fewer primary candidates in general, when they guaranteed ex-officio party membership to their co-partisan elected officials.

    Breast density classification with deep convolutional neural networks

    Breast density classification is an essential part of breast cancer screening. Although a lot of prior work has considered this problem as a task for learning algorithms, to our knowledge, all of it used small and not clinically realistic data sets both for training and evaluation of the models. In this work, we explore the limits of this task with a data set coming from over 200,000 breast cancer screening exams. We use this data to train and evaluate a strong convolutional neural network classifier. In a reader study, we find that our model can perform this task comparably to a human expert.
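One common way such a reader study is quantified (an assumption for illustration; the paper's exact agreement metric may differ) is chance-corrected agreement between model and expert labels, e.g. Cohen's kappa over the density classes:

```python
import numpy as np

def cohens_kappa(expert, model, n_classes):
    # Chance-corrected agreement between two raters' class labels.
    cm = np.zeros((n_classes, n_classes))
    for a, b in zip(expert, model):
        cm[a, b] += 1
    cm /= cm.sum()
    p_obs = np.trace(cm)                      # observed agreement
    p_exp = cm.sum(axis=1) @ cm.sum(axis=0)   # agreement expected by chance
    return float((p_obs - p_exp) / (1 - p_exp))
```

A kappa near the inter-expert kappa is the usual operationalisation of "comparably to a human expert".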

    Exploiting diversity for efficient machine learning

    A common practice for solving machine learning problems is currently to consider each problem in isolation, starting from scratch every time a new learning problem is encountered or a new model is proposed. This is a perfectly feasible solution when the problems are sufficiently easy or, if the problem is hard, when a large amount of resources, in terms of both training data and computation, is available. Although this naive approach has been the main focus of research in machine learning for a few decades and has had a lot of success, it becomes infeasible if the problem is too hard in proportion to the available resources. When using a complex model in this naive approach, it is necessary to collect large data sets (if possible at all) to avoid overfitting, and hence it is also necessary to use large computational resources to handle the increased amount of data, first during training to process a large data set and then also at test time to execute a complex model. An alternative to this strategy of treating each learning problem independently is to leverage related data sets and computation encapsulated in previously trained models. By doing that we can decrease the amount of data necessary to reach a satisfactory level of performance and, consequently, improve the accuracy achievable and decrease training time. Our attack on this problem is to exploit diversity - in the structure of the data set, in the features learnt and in the inductive biases of different neural network architectures. In the setting of learning from multiple sources we introduce multiple-source cross-validation, which gives an unbiased estimator of the test error when the data set is composed of data coming from multiple sources and the data at test time are coming from a new unseen source. We also propose new estimators of the variance of the standard k-fold cross-validation and multiple-source cross-validation, which have lower bias than previously known ones.
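The multiple-source cross-validation idea can be sketched as leave-one-source-out evaluation (a generic sketch under our own naming, not the thesis code): each fold tests on a source that was entirely unseen during training, mimicking deployment on data from a new source.

```python
import numpy as np

def multiple_source_cv(X, y, sources, fit, score):
    # Leave-one-source-out: every fold holds out all data from one source,
    # so the test fold always comes from a source unseen in training.
    errs = []
    for s in np.unique(sources):
        test = sources == s
        model = fit(X[~test], y[~test])
        errs.append(score(model, X[test], y[test]))
    return float(np.mean(errs))

# Toy demo: a majority-class "model" on sources with shifting label balance.
X = np.zeros((8, 1))                            # features unused by the toy model
y = np.array([0, 0, 0, 1, 1, 1, 0, 0])
sources = np.array([0, 0, 0, 1, 1, 1, 2, 2])
fit = lambda X, y: np.bincount(y).argmax()      # majority-class predictor
score = lambda m, X, y: float(np.mean(y != m))  # error rate
err = multiple_source_cv(X, y, sources, fit, score)
```

Ordinary k-fold would mix sources across train and test folds and therefore be optimistic in this setting; grouping folds by source is what makes the estimate unbiased for an unseen source.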
To improve unsupervised learning we introduce scheduled denoising autoencoders, which learn a more diverse set of features than the standard denoising autoencoder. This is thanks to their training procedure, which starts with a high level of noise, when the network is learning coarse features, and then lowers the noise gradually, which allows the network to learn more local features. A connection between this training procedure and curriculum learning is also drawn. We develop the idea of learning a diverse representation further by explicitly incorporating the goal of obtaining a diverse representation into the training objective. The proposed model, the composite denoising autoencoder, learns multiple subsets of features focused on modelling variations in the data set at different levels of granularity. Finally, we introduce the idea of model blending, a variant of model compression, in which the teacher and the student are both strong models but differ in their inductive biases. As an example, we train convolutional networks using the guidance of bidirectional long short-term memory (LSTM) networks. This allows us to train the convolutional neural network to be more accurate than the LSTM network at no extra cost at test time.
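The model-blending objective can be sketched as a distillation-style loss, where the student is fit to the teacher's temperature-softened predictions (a generic formulation; the exact loss and temperature used in the thesis may differ):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def blending_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's temperature-softened predictions
    # and the student's: the student (e.g. a convolutional network) is
    # pulled toward a strong teacher with a different inductive bias
    # (e.g. a bidirectional LSTM). T > 1 softens both distributions.
    p = softmax(np.asarray(teacher_logits) / T)
    q = softmax(np.asarray(student_logits) / T)
    return float(-(p * np.log(q + 1e-12)).sum(axis=-1).mean())
```

The loss is minimised when the student's softened distribution matches the teacher's, so a student that agrees with the teacher scores strictly better than one that does not; at test time only the student runs, hence "no extra cost".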