13 research outputs found

    30 Jahre „Geschichte der Gouvernementalität“: Wir brauchen mehr Geschichte des Wissens

    Get PDF
    An increasing number of studies have appeared that together come under the label of governmentality. The topics show that an analysis oriented to practices can integrate a broad spectrum of social phenomena that lies beyond the reach of conventional theories of the state. The founding text of governmentality studies is less well known. Foucault's "history of governmentality" is a genealogy of liberal government, ending with the rise of the so-called neoliberal transformation of the 1970s. The weakness of that text is that it fails to show the connections of the concept of governmentality to epistemology. Likewise, recent studies mention the power relations of knowledge only in general terms, leaving science in a sphere of its own. However, the originality and strength of the analysis of governmental power depend on its linkage to a history of knowledge.

    For science, love and money: the social worlds of poultry and rabbit breeding in Britain, 1900-1940

    Get PDF
    This paper traces the joint histories of poultry and rabbit breeding by fanciers, and for commercial and scientific purposes, in early 20th-century Britain. I show that the histories of the social worlds that bred for these different purposes are intertwined, as are the histories of poultry and rabbit breeding in general. To properly understand the history of scientific breeding, we must therefore understand the general context of breeding in which it occurred. In the paper I show that as fancy poultry and rabbits were taken up for scientific research at the start of the 20th century, they became scientific specimens and boundary objects between the social worlds. Their existence as boundary objects motivated the social worlds to coordinate their work through translators and trading zones. By the 1930s all three coordination methods were being used simultaneously.

    Performance Analysis of Federated Learning Algorithms for Multilingual Protest News Detection Using Pre-Trained DistilBERT and BERT

    No full text
    Data scientists in the Natural Language Processing (NLP) field confront the challenge of reconciling the necessity for data-centric analyses with the imperative to safeguard sensitive information, all while managing the substantial costs linked to the collection of training data. In a Federated Learning (FL) system, these challenges can be alleviated by training a global model, eliminating the need to centralize clients' sensitive data. However, distributed NLP data is usually Non-Independent and Identically Distributed (Non-IID), which leads to poorer generalizability of the global model when trained with Federated Averaging (FedAvg). Recently proposed extensions to FedAvg promise to improve global model performance on Non-IID data. Yet, such advanced FL algorithms trained on multilingual Non-IID texts have not been studied in detail in industry or academia. This paper compares, for the first time, the FL algorithms FedAvg, FedAvgM, FedYogi, FedAdam and FedAdagrad for a binary text classification task using 12,078 tailored real-world news reports in English, Portuguese, Spanish and Hindi. For this objective, pre-trained DistilBERT and BERT models fine-tuned with these texts are used. The results show that FedYogi is the most stable and robust FL algorithm when DistilBERT is used, achieving an average macro F1 score of 0.7789 for IID and 0.7755 for Non-IID protest news. The study also shows that BERT models trained with weighted FedAvg and FedAvgM can achieve predictive power similar to centralized language models, demonstrating the potential of leveraging FL in the NLP domain without the need to collect data centrally.
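    The abstract compares FedAvg against adaptive extensions on Non-IID multilingual text. As a point of reference, the sketch below illustrates the weighted FedAvg aggregation step that these extensions build on; the client count, layer shape and per-client sample sizes are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch of weighted Federated Averaging (FedAvg), the baseline the paper
# compares against FedAvgM, FedYogi, FedAdam and FedAdagrad. Client count, layer
# shapes and sample sizes below are illustrative assumptions, not the study's setup.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    aggregated = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(
            (n / total) * weights[layer_idx]
            for weights, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer)
    return aggregated

# Toy example: three clients with differently sized (Non-IID) local corpora,
# each holding one dense layer of a binary classifier head.
clients = [[np.random.randn(768, 2)] for _ in range(3)]
sizes = [5000, 4000, 3078]  # illustrative per-client report counts
global_layer = fedavg(clients, sizes)
print(global_layer[0].shape)  # (768, 2)
```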

    Leveraging human expert image annotations to improve pneumonia differentiation through human knowledge distillation

    No full text
    In medical imaging, deep learning models can be a critical tool to shorten time-to-diagnosis and support specialized medical staff in clinical decision making. The successful training of deep learning models usually requires large amounts of quality data, which are often not available in many medical imaging tasks. In this work we train a deep learning model on university hospital chest X-ray data containing 1082 images. The data was reviewed, differentiated into four causes of pneumonia, and annotated by an expert radiologist. To successfully train a model on this small amount of complex image data, we propose a special knowledge distillation process, which we call Human Knowledge Distillation. This process enables deep learning models to utilize annotated regions in the images during the training process. This form of guidance by a human expert improves model convergence and performance. We evaluate the proposed process on our study data for multiple types of models, all of which show improved results. The best model of this study, called PneuKnowNet, shows an improvement of +2.3 percentage points in overall accuracy compared to a baseline model and also leads to more meaningful decision regions. Utilizing this implicit data quality-quantity trade-off can be a promising approach for many scarce-data domains beyond medical imaging.
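    The abstract does not specify how the annotated regions enter the training loop. Purely as an illustrative assumption, the sketch below shows one generic way region guidance can be combined with a standard classification loss: an attention map is penalized for mass falling outside the expert-annotated mask. The names (region_guided_loss, attn_map, expert_mask, alpha) and the 14x14 map size are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of region-guided training, NOT the paper's Human Knowledge
# Distillation procedure: cross-entropy plus a penalty on attention outside the
# expert-annotated mask. All names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def region_guided_loss(logits, labels, attn_map, expert_mask, alpha=0.5):
    """Classification loss plus a penalty on attention mass outside the mask."""
    ce = F.cross_entropy(logits, labels)
    # Normalize the attention map per image, then penalize attention that
    # falls outside the radiologist-annotated region.
    attn = attn_map / (attn_map.sum(dim=(1, 2), keepdim=True) + 1e-8)
    outside = (attn * (1.0 - expert_mask)).sum(dim=(1, 2)).mean()
    return ce + alpha * outside

# Toy shapes: batch of 4 images, 14x14 attention maps, binary expert masks.
logits = torch.randn(4, 4)             # four pneumonia causes
labels = torch.randint(0, 4, (4,))
attn_map = torch.rand(4, 14, 14)
expert_mask = (torch.rand(4, 14, 14) > 0.5).float()
print(region_guided_loss(logits, labels, attn_map, expert_mask))
```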

    Gegen|Wissen

    No full text
    Covid-19, the climate crisis, Big Tech, algorithmic bias, #MeToo: in recent years, knowledge, science and technology have become politicized to a degree last seen around 1980. Back then the issues were different: forest dieback, the ozone hole, nuclear disasters, genetic manipulation, automation. Out of the critique of the existing order of knowledge there arose, within social movements and soon also in politics, business and official science, a need for alternative forms of knowledge: »Gegenwissen« (counter-knowledge). What was this counter-knowledge? Where did it succeed? Where did it fail? And why is it relevant again today? These questions are taken up in the first volume of cache, which interconnects the research of twelve historians of science and technology from Switzerland, Germany and Austria.