686 research outputs found

    Non-Market Food Practices Do Things Markets Cannot: Why Vermonters Produce and Distribute Food That's Not For Sale

    Researchers tend to portray food self-provisioning in high-income societies as a coping mechanism for the poor or a hobby for the well-off. They describe food charity as a regrettable band-aid. Vegetable gardens and neighborly sharing are considered remnants of precapitalist tradition. These are non-market food practices: producing food that is not for sale and distributing food in ways other than selling it. Recent scholarship challenges those standard understandings by showing (i) that non-market food practices remain prevalent in high-income countries, (ii) that people in diverse social groups engage in these practices, and (iii) that they articulate diverse reasons for doing so. In this dissertation, I investigate the persistent pervasiveness of non-market food practices in Vermont. To go beyond explanations that rely on individual motivation, I examine the roles these practices play in society. First, I investigate the prevalence of non-market food practices. Several surveys with large, representative samples reveal that more than half of Vermont households grow, hunt, fish, or gather some of their own food. Respondents estimate that they acquire 14% of the food they consume through non-market means, on average. For reference, commercial local food makes up about the same portion of total consumption. Then, drawing on the words of 94 non-market food practitioners I interviewed, I demonstrate that these practices serve functions that markets cannot. Interviewees attested that non-market distribution is special because it feeds the hungry, strengthens relationships, builds resilience, puts edible-but-unsellable food to use, and aligns with a desired future in which food is not for sale. Hunters, fishers, foragers, scavengers, and homesteaders said that these activities contribute to their long-run food security as a skills-based safety net. 
Self-provisioning allows them to eat from the landscape despite disruptions to their ability to access market food, such as job loss, supply chain problems, or a global pandemic. Additional evidence from vegetable growers suggests that non-market settings liberate production from financial discipline, making space for work that is meaningful, playful, educational, and therapeutic. Non-market food practices mend holes in the social fabric torn by the commodification of everyday life. Finally, I synthesize scholarly critiques of markets as institutions for organizing the production and distribution of food. Markets send food toward money rather than hunger. Producing for market compels farmers to prioritize financial viability over other values such as stewardship. Historically, people rarely, if ever, sell each other food until external authorities coerce them to do so through taxation, indebtedness, cutting off access to the means of subsistence, or extinguishing non-market institutions. Today, more humans than ever suffer from chronic undernourishment even as the scale of commercial agriculture pushes environmental pressures past critical thresholds of planetary sustainability. This research substantiates that alternatives to markets exist and have the potential to address their shortcomings.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Making Connections: A Handbook for Effective Formal Mentoring Programs in Academia

    This book, Making Connections: A Handbook for Effective Formal Mentoring Programs in Academia, makes a unique and needed contribution to the mentoring field, as it focuses solely on mentoring in academia. This handbook is a collaborative institutional effort between Utah State University's (USU) Empowering Teaching Open Access Book Series and the Mentoring Institute at the University of New Mexico (UNM). This book is available as (a) an e-book through Pressbooks, (b) a downloadable PDF on USU's Open Access Book Series website, and (c) a print version available for purchase on the USU Empower Teaching Open Access page and on Amazon.

    MPC for Tech Giants (GMPC): Enabling Gulliver and the Lilliputians to Cooperate Amicably

    In the current digital world, large organizations (sometimes referred to as tech giants) provide service to extremely large numbers of users. The service provider is often interested in computing various data analyses over the private data of its users, who in turn have their own incentives to cooperate but do not necessarily trust the service provider. In this work, we introduce the Gulliver multi-party computation model (GMPC) to realistically capture the above scenario. The GMPC model considers a single highly powerful party, called the server or Gulliver, that is connected to n users over a star topology network (alternatively formulated as a full network in which the server can block any message). The users are significantly less powerful than the server and, in particular, should have both computation and communication complexities that are polylogarithmic in n. Protocols in the GMPC model should be secure against malicious adversaries that may corrupt a subset of the users and/or the server. Designing protocols in the GMPC model is a delicate task, since users can only hold information about polylog(n) other users (and, in particular, can only communicate with polylog(n) other users). In addition, the server can block any message between any pair of honest parties. Thus, reaching agreement becomes a challenging task. Nevertheless, we design generic protocols in the GMPC model, assuming that at most an α < 1/8 fraction of the users may be corrupted (in addition to the server). Our main contribution is a variant of Feige's committee election protocol [FOCS 1999] that is secure in the GMPC model. Given this tool we show: 1. Assuming fully homomorphic encryption (FHE), any computationally efficient function with O(n · polylog(n))-size output can be securely computed in the GMPC model. 2. Any function that can be computed by a circuit of O(polylog(n)) depth, O(n · polylog(n)) size, and bounded fan-in and fan-out can be securely computed in the GMPC model without assuming FHE. 3. In particular, sorting can be securely computed in the GMPC model without assuming FHE. This has important applications for the shuffle model of differential privacy, and resolves an open question of Bell et al. [CCS 2020].
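The committee-election building block the abstract builds on can be illustrated with Feige's lightest-bin protocol [FOCS 1999]. The following is a minimal sketch under simplifying assumptions (illustrative Python; the function and variable names are ours, and the adversarial behavior and GMPC-specific communication constraints are omitted): each of the n parties announces a uniformly random bin out of B, and the parties landing in the least-populated bin form the committee.

```python
import random
from collections import defaultdict

def lightest_bin_election(n, num_bins, rng):
    """Feige-style lightest-bin committee election (simplified sketch):
    each of the n parties announces a uniformly random bin; the parties
    landing in the least-populated bin form the committee."""
    bins = defaultdict(list)
    for party in range(n):
        bins[rng.randrange(num_bins)].append(party)
    # Pick the least-populated bin (ties broken by bin index).
    lightest = min(range(num_bins), key=lambda b: (len(bins[b]), b))
    return bins[lightest]

# Toy run: 1000 parties, 10 bins.
committee = lightest_bin_election(1000, 10, random.Random(0))
# By pigeonhole, the lightest bin holds at most n / num_bins parties.
print(len(committee) <= 100)  # prints True
```

The appeal of the lightest-bin rule is that even if corrupted parties choose their bins adversarially, the elected committee is small by pigeonhole, and with high probability the fraction of corrupted parties inside it stays bounded.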

    If interpretability is the answer, what is the question?

    Due to the ability to model even complex dependencies, machine learning (ML) can be used to tackle a broad range of (high-stakes) prediction problems. The complexity of the resulting models comes at the cost of transparency, meaning that it is difficult to understand the model by inspecting its parameters. This opacity is considered problematic since it hampers the transfer of knowledge from the model, undermines the agency of individuals affected by algorithmic decisions, and makes it more challenging to expose non-robust or unethical behaviour. To tackle the opacity of ML models, the field of interpretable machine learning (IML) has emerged. The field is motivated by the idea that if we could understand the model's behaviour -- either by making the model itself interpretable or by inspecting post-hoc explanations -- we could also expose unethical and non-robust behaviour, learn about the data generating process, and restore the agency of affected individuals. IML is not only a highly active area of research, but the developed techniques are also widely applied in both industry and the sciences. Despite the popularity of IML, the field faces fundamental criticism, questioning whether IML actually helps in tackling the aforementioned problems of ML and even whether it should be a field of research in the first place: First and foremost, IML is criticised for lacking a clear goal and, thus, a clear definition of what it means for a model to be interpretable. On a similar note, the meaning of existing methods is often unclear, and thus they may be misunderstood or even misused to hide unethical behaviour. Moreover, estimating conditional-sampling-based techniques poses a significant computational challenge. With the contributions included in this thesis, we tackle these three challenges for IML. We join a range of work by arguing that the field struggles to define and evaluate "interpretability" because incoherent interpretation goals are conflated. 
However, the different goals can be disentangled such that coherent requirements can inform the derivation of the respective target estimands. We demonstrate this with the examples of two interpretation contexts: recourse and scientific inference. To tackle the misinterpretation of IML methods, we suggest deriving formal interpretation rules that link explanations to aspects of the model and data. In our work, we specifically focus on interpreting feature importance. Furthermore, we collect interpretation pitfalls and communicate them to a broader audience. To efficiently estimate conditional-sampling-based interpretation techniques, we propose two methods that leverage the dependence structure in the data to simplify the estimation problems for Conditional Feature Importance (CFI) and SAGE. A causal perspective proved to be vital in tackling the challenges: first, since IML problems such as algorithmic recourse are inherently causal; second, since causality helps to disentangle the different aspects of model and data and, therefore, to distinguish the insights that different methods provide; and third, since algorithms developed for causal structure learning can be leveraged for the efficient estimation of conditional-sampling-based IML methods.
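As background for the CFI estimation problem the abstract mentions, here is a minimal sketch of plain (marginal) permutation feature importance: the rise in loss when a feature is replaced by a permuted copy. Conditional Feature Importance follows the same template but samples the feature conditional on the remaining features rather than permuting it marginally. This sketch is ours, not the dissertation's estimator; all names are illustrative.

```python
import numpy as np

def permutation_importance(model, X, y, loss, j, rng, n_repeats=10):
    """Marginal permutation feature importance (PFI): average rise in loss
    when column j of X is replaced by a random permutation of itself.
    CFI uses the same template with conditional sampling of column j."""
    base = loss(y, model(X))
    rises = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        rises.append(loss(y, model(X_perm)) - base)
    return float(np.mean(rises))

# Toy example: y depends on feature 0 only, so feature 1 is unimportant.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]                    # the "fitted" model
mse = lambda y, yhat: float(np.mean((y - yhat) ** 2))
print(permutation_importance(model, X, y, mse, 0, rng) >
      permutation_importance(model, X, y, mse, 1, rng))  # prints True
```

Note that when features are correlated, this marginal permutation creates unrealistic data points, which is precisely the shortcoming that motivates the conditional-sampling-based methods discussed above.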

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    An Ethnographic Examination of Belonging and Hype in Web3 Communities

    Advocates of the Web3 movement want the next stage of the internet's evolution to be characterized by the decentralization of virtual assets and the democratization of digital participation via possibilities afforded by blockchain technology. In this ethnography, the researcher contrasts the ideologies and material practices of individuals and communities composing said Web3 schema, including enthusiasts of the emergent metaverse, non-fungible tokens (NFTs), and blockchain-based institutions called decentralized autonomous organizations (DAOs) that rely on a blend of human and algorithmic governance to operate. Based on fieldwork in London, England during the summer of 2022, this thesis uncovers patterns of hype making and hype ambivalence that inform belonging within Web3 spaces and conversely establish parameters for exclusion. The auratic qualities of NFTs the researcher acquired at Proof of People, a three-day NFT and metaverse festival hosted in London's Fabric nightclub, are also investigated. With data compiled from a mix of in-person and virtual settings, the claims presented in this thesis arrive at a critical moment for crypto. Universal devaluation of blockchain assets following a market freefall in 2021 has battered the decentralized movement, but the Web3 informants featured herein remain optimistic about the vitality of a digital paradigm in its alleged nascency.

    Development and evaluation of a behavior change support system targeting learning behavior: a technology-based approach to complement the education of future executives using persuasive systems in higher education

    Learning is crucial in today's information societies, and the need for comprehensive and accessible support systems to enhance learning competencies is increasingly evident. In this context, this dissertation provides descriptive knowledge about the demands for such a system, aiming to train higher education students and equip them with learning competencies. Drawing on design science research, the dissertation addresses the identified demands, considering technical frameworks and psychological models to design technology-based artifacts. Against this background, the dissertation provides a pragmatic contribution through novel artifacts in the form of Behavior Change Support Systems targeting self-regulated learning in higher education. The evaluation of these artifacts extends prior design knowledge through specific recommendations, including design principles, that can guide the implementation of Behavior Change Support Systems and further technology-based interventions in higher education. These recommendations aim to promote the development of necessary competencies within the higher education context.

    Acting-arrangement Ontology Introduced

    In this thesis I begin the task of setting out and arguing for an account of the ontology of the world. The account of ontology I propose, acting-arrangement ontology (AAO), is radically new: it follows from a recognition that two seemingly prosaic and previously underappreciated features of our world are, I shall argue, the twin keystones of its ontology: acting and arrangement. My aim in this preliminary work is to develop and set out the main pieces of this ontology, showing how they fit together to form a compelling whole, so as to make a case that this account of ontology warrants further investigation.