
    Non-Market Food Practices Do Things Markets Cannot: Why Vermonters Produce and Distribute Food That's Not For Sale

    Researchers tend to portray food self-provisioning in high-income societies as a coping mechanism for the poor or a hobby for the well-off. They describe food charity as a regrettable band-aid. Vegetable gardens and neighborly sharing are considered remnants of precapitalist tradition. These are non-market food practices: producing food that is not for sale and distributing food in ways other than selling it. Recent scholarship challenges those standard understandings by showing (i) that non-market food practices remain prevalent in high-income countries, (ii) that people in diverse social groups engage in these practices, and (iii) that they articulate diverse reasons for doing so. In this dissertation, I investigate the persistent pervasiveness of non-market food practices in Vermont. To go beyond explanations that rely on individual motivation, I examine the roles these practices play in society. First, I investigate the prevalence of non-market food practices. Several surveys with large, representative samples reveal that more than half of Vermont households grow, hunt, fish, or gather some of their own food. Respondents estimate that they acquire 14% of the food they consume through non-market means, on average. For reference, commercial local food makes up about the same portion of total consumption. Then, drawing on the words of 94 non-market food practitioners I interviewed, I demonstrate that these practices serve functions that markets cannot. Interviewees attested that non-market distribution is special because it feeds the hungry, strengthens relationships, builds resilience, puts edible-but-unsellable food to use, and aligns with a desired future in which food is not for sale. Hunters, fishers, foragers, scavengers, and homesteaders said that these activities contribute to their long-run food security as a skills-based safety net. Self-provisioning allows them to eat from the landscape despite disruptions to their ability to access market food such as job loss, supply chain problems, or a global pandemic. Additional evidence from vegetable growers suggests that non-market settings liberate production from financial discipline, making space for work that is meaningful, playful, educational, and therapeutic. Non-market food practices mend holes in the social fabric torn by the commodification of everyday life. Finally, I synthesize scholarly critiques of markets as institutions for organizing the production and distribution of food. Markets send food toward money rather than hunger. Producing for market compels farmers to prioritize financial viability over other values such as stewardship. Historically, people rarely if ever sell each other food until external authorities coerce them to do so through taxation, indebtedness, cutting off access to the means of subsistence, or extinguishing non-market institutions. Today, more humans than ever suffer from chronic undernourishment even as the scale of commercial agriculture pushes environmental pressures past critical thresholds of planetary sustainability. This research substantiates that alternatives to markets exist and have the potential to address their shortcomings.

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    The development of an international model for technology adoption: the case of Hong Kong

    The purpose of this study is to examine the causal relationships between the internal belief formation of a decision-maker on technology adoption and the extent of the development of a technology adoptive behaviour. In particular, this study aims to develop an International Model For Technology Adoption (IMTA), which builds upon the Theory of Planned Behaviour (Ajzen 1992) and improves on the framework of the Technology Acceptance Model (Davis 1986). The development of such a model requires an understanding of the environmental factors which shape the cognitive processes of the decision maker. Hence, this is a behavioural model which investigates the constructs influencing the adoption behaviour and how the interaction between these constructs and the external variables can impact the decision-making process at the level of the firm. Previous research on technology transfer and innovation diffusion has classified factors affecting the diffusion process into two dimensions: 1) external-influence and 2) internal-influence. Hence, in this research, the International Model For Technology Adoption looks at how the endogenous and exogenous factors enter into the cognitive process of a technology adoption decision through which attitudes and behavioural intentions are shaped. Under the IMTA, the behavioural intention to adopt is a function of two exogenous variables: 1) Strategic Choice and 2) Environmental Control. The Environmental Control factor is further categorised by two exogenous factors, namely 1) Government Influence and 2) Competitive Influence. In addition, the Competitive Influence factor is, in turn, classified into five forces, namely: 1) Industry Structure, 2) Price Intensity, 3) Demand Uncertainty, 4) Information Exposure, and 5) Domestic Availability. Regarding the cognitive process which forms the attitude to adopt, it is hypothesised to be affected by six other endogenous beliefs: 1) Compatibility; 2) Enhanced Value; 3) Perceived Benefits; 4) Adaptative Experiences; 5) Perceived Difficulty; and 6) Suppliers’ Commitment. A survey research method was utilised in this study and the research instrument was developed after a comprehensive review of the relevant literature and an expert interview. A total of 298 completed questionnaires were returned, giving a response rate of 13.56%. Of the 298 questionnaires, 39 of the responses were unusable due to missing data. This gives a total of 259 usable questionnaires and an effective response rate of 11.78%. The results of the analysis suggested that the fit of the International Model For Technology Adoption was good and the data of this study supported the overall structure of the IMTA. When compared with the null model, which was used by EQS as a baseline model to judge the overall fit of the IMTA, the IMTA yielded a value of 0.914 on the Comparative Fit Index, indicating a well-fitting model. In addition, the results of the principal component analysis also illustrated that the 16-factor International Model For Technology Adoption was an adequate model to capture the information collected during the survey. The results showed that this 16-factor structure represented nearly 77% of the total variance of all items. A further analysis of the factor structure again revealed a perfect match between the conceptual dimensionality of the International Model For Technology Adoption and the empirical data collected in the survey.
However, the results of the hypothesis testing on the individual constructs were mixed. While not all of the ten hypothesised effects were statistically significant, almost all pointed in the direction conceptualised by the IMTA. From these results, it can be interpreted that while the structural equation modelling analysis provided overall support for the International Model For Technology Adoption, the results for the individual constructs of the Model revealed that some constructs had a larger impact than others in the decision-making process to adopt foreign technology. In particular, the intention to adopt was greatly affected by the attitude of the prospective adopters, the influence of the government and the degree of industry rivalry. However, the impact of the overall competitive influence factor on the intention to adopt was not supported by the results. Likewise, the existence of investment alternatives was not a serious concern for the prospective adopters.
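
    As context for the model comparison reported above, the short Python sketch below shows how a Comparative Fit Index (CFI) is computed from the chi-square statistics of a hypothesised model and its null baseline. The chi-square and degrees-of-freedom values are placeholders chosen only to illustrate the calculation; they are not figures from the thesis.

        # Minimal CFI sketch; input values are hypothetical, not the study's statistics.
        def comparative_fit_index(chi2_model, df_model, chi2_null, df_null):
            """CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_n - df_n, chi2_m - df_m, 0)."""
            non_centrality_model = max(chi2_model - df_model, 0.0)
            non_centrality_null = max(chi2_null - df_null, non_centrality_model, 0.0)
            if non_centrality_null == 0.0:
                return 1.0  # both models fit at least as well as their degrees of freedom predict
            return 1.0 - non_centrality_model / non_centrality_null

        # Placeholder chi-square/df values that happen to yield a CFI near 0.914.
        print(round(comparative_fit_index(450.0, 290.0, 2150.0, 300.0), 3))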

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the plannability of production capacities. In particular, unplanned failures during production times cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance starts here and tries to predict a possible failure and its cause so early that its prevention can be prepared and carried out in time. In order to be able to predict malfunctions and failures, the industrial plant with its characteristics, as well as wear and ageing processes, must be modelled. Such modelling can be done by replicating its physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models using data and offer an alternative, especially when very complex and non-linear behaviour is evident. In order for models to make predictions, as much data as possible about the condition of a plant and its environment, as well as production planning data, is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which data is transmitted, place high demands on the data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case or if the processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become important criteria. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, the two hypotheses presented in this thesis emerged: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) if a neural cell has a deeper internal structure, this leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture.
Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in the form of laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. As a result, it could be proven for the specific application and the data used that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
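
    The abstract does not spell out the internal architecture of the SlicedLSTM here, so the PyTorch sketch below only illustrates the general idea named in hypothesis a): distributing complexity across several simpler LSTM subnetworks whose outputs are combined, instead of using one large LSTM. The class name, layer sizes, slicing scheme and shapes are assumptions made for illustration, not the thesis's actual design.

        import torch
        import torch.nn as nn

        class SlicedLSTMSketch(nn.Module):
            """Hypothetical layout: split the input features into slices, run a small
            LSTM per slice, and combine the final hidden states for one regression
            output (e.g. a health indicator of a turbine module)."""

            def __init__(self, n_features: int, n_slices: int, hidden_per_slice: int = 16):
                super().__init__()
                assert n_features % n_slices == 0, "features must divide evenly into slices"
                self.slice_width = n_features // n_slices
                self.lstms = nn.ModuleList(
                    nn.LSTM(self.slice_width, hidden_per_slice, batch_first=True)
                    for _ in range(n_slices)
                )
                self.head = nn.Linear(hidden_per_slice * n_slices, 1)

            def forward(self, x):  # x: (batch, time, n_features)
                parts = torch.split(x, self.slice_width, dim=2)
                finals = []
                for lstm, part in zip(self.lstms, parts):
                    _, (h_n, _) = lstm(part)   # h_n: (1, batch, hidden_per_slice)
                    finals.append(h_n[-1])     # (batch, hidden_per_slice)
                return self.head(torch.cat(finals, dim=1))

        # Made-up shapes: 24 sensor channels, 4 slices, sequences of 30 time steps.
        model = SlicedLSTMSketch(n_features=24, n_slices=4)
        print(model(torch.randn(8, 30, 24)).shape)  # torch.Size([8, 1])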

    Introduction to Psychology

    Introduction to Psychology is a modified version of Psychology 2e by OpenStax.

    Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners

    Large language models (LLMs) exhibit a wide range of promising capabilities -- from step-by-step planning to commonsense reasoning -- that may provide utility for robots, but remain prone to confidently hallucinated predictions. In this work, we present KnowNo, a framework for measuring and aligning the uncertainty of LLM-based planners such that they know when they don't know and ask for help when needed. KnowNo builds on the theory of conformal prediction to provide statistical guarantees on task completion while minimizing human help in complex multi-step planning settings. Experiments across a variety of simulated and real robot setups that involve tasks with different modes of ambiguity (e.g., from spatial to numeric uncertainties, from human preferences to Winograd schemas) show that KnowNo performs favorably over modern baselines (which may involve ensembles or extensive prompt tuning) in terms of improving efficiency and autonomy, while providing formal assurances. KnowNo can be used with LLMs out of the box without model-finetuning, and suggests a promising lightweight approach to modeling uncertainty that can complement and scale with the growing capabilities of foundation models. Website: https://robot-help.github.io. Comment: Conference on Robot Learning (CoRL) 2023, Oral Presentation.
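
    As a rough illustration of the conformal-prediction machinery the abstract refers to, the Python sketch below calibrates a score threshold and forms a prediction set over candidate actions, asking for help when that set is not a singleton. The scores, candidate actions and probabilities are invented placeholders, not data or code from the KnowNo release.

        import numpy as np

        rng = np.random.default_rng(0)
        alpha = 0.15  # target miscoverage rate

        # Calibration: nonconformity score of the *true* option for each past task,
        # e.g. 1 - (model-assigned probability of the correct action). Placeholder data.
        cal_scores = rng.uniform(0.0, 0.7, size=200)
        n = cal_scores.size
        q_hat = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

        # New task: candidate actions with hypothetical model probabilities.
        options = {"pick up the apple": 0.55, "pick up the orange": 0.35, "do nothing": 0.10}
        prediction_set = [o for o, p in options.items() if 1.0 - p <= q_hat]

        # Ask a human only when more than one action remains plausible.
        if len(prediction_set) != 1:
            print("Ask for help; plausible actions:", prediction_set)
        else:
            print("Act autonomously:", prediction_set[0])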

    Microcredentials to support PBL

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged in the past years since the appearance of the fourth volume of DSmT in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classification. Finally, the third part presents interesting contributions related to belief functions in general that have been published or presented over the years since 2015. These contributions are related to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
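
    For readers unfamiliar with the PCR rules mentioned throughout, the Python sketch below implements the classic two-source PCR5 combination of basic belief assignments (for two sources PCR5 and PCR6 coincide). The frame, focal elements and masses are invented for illustration; the volume's own Matlab codes and the Silx-Furtif RUST library remain the reference implementations.

        from itertools import product

        def pcr5_combine(m1, m2):
            """Two-source PCR5: conjunctive rule, with each partial conflict m1(A)*m2(B)
            (A and B disjoint) redistributed back to A and B in proportion to m1(A), m2(B).
            Focal elements are frozensets over a common frame; masses sum to 1."""
            combined = {}
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:  # non-conflicting product mass goes to the intersection
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:      # conflicting mass wa*wb is redistributed to a and b
                    combined[a] = combined.get(a, 0.0) + wa * wa * wb / (wa + wb)
                    combined[b] = combined.get(b, 0.0) + wb * wb * wa / (wa + wb)
            return combined

        # Hypothetical example over the frame {A, B}: two sources that disagree strongly.
        m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
        m2 = {frozenset({"B"}): 0.7, frozenset({"A", "B"}): 0.3}
        result = pcr5_combine(m1, m2)
        for focal, mass in sorted(result.items(), key=lambda kv: sorted(kv[0])):
            print(set(focal), round(mass, 3))
        print("total mass:", round(sum(result.values()), 3))  # stays equal to 1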