3,636 research outputs found
Non-Market Food Practices Do Things Markets Cannot: Why Vermonters Produce and Distribute Food That's Not For Sale
Researchers tend to portray food self-provisioning in high-income societies as a coping mechanism for the poor or a hobby for the well-off. They describe food charity as a regrettable band-aid. Vegetable gardens and neighborly sharing are considered remnants of precapitalist tradition. These are non-market food practices: producing food that is not for sale and distributing food in ways other than selling it. Recent scholarship challenges those standard understandings by showing (i) that non-market food practices remain prevalent in high-income countries, (ii) that people in diverse social groups engage in these practices, and (iii) that they articulate diverse reasons for doing so. In this dissertation, I investigate the persistent pervasiveness of non-market food practices in Vermont. To go beyond explanations that rely on individual motivation, I examine the roles these practices play in society.
First, I investigate the prevalence of non-market food practices. Several surveys with large, representative samples reveal that more than half of Vermont households grow, hunt, fish, or gather some of their own food. Respondents estimate that they acquire 14% of the food they consume through non-market means, on average. For reference, commercial local food makes up about the same portion of total consumption.
Then, drawing on the words of 94 non-market food practitioners I interviewed, I demonstrate that these practices serve functions that markets cannot. Interviewees attested that non-market distribution is special because it feeds the hungry, strengthens relationships, builds resilience, puts edible-but-unsellable food to use, and aligns with a desired future in which food is not for sale. Hunters, fishers, foragers, scavengers, and homesteaders said that these activities contribute to their long-run food security as a skills-based safety net. Self-provisioning allows them to eat from the landscape despite disruptions to their ability to access market food such as job loss, supply chain problems, or a global pandemic. Additional evidence from vegetable growers suggests that non-market settings liberate production from financial discipline, making space for work that is meaningful, playful, educational, and therapeutic. Non-market food practices mend holes in the social fabric torn by the commodification of everyday life.
Finally, I synthesize scholarly critiques of markets as institutions for organizing the production and distribution of food. Markets send food toward money rather than hunger. Producing for market compels farmers to prioritize financial viability over other values such as stewardship. Historically, people rarely if ever sold each other food until external authorities coerced them to do so through taxation, indebtedness, cutting off access to the means of subsistence, or extinguishing non-market institutions. Today, more humans than ever suffer from chronic undernourishment even as the scale of commercial agriculture pushes environmental pressures past critical thresholds of planetary sustainability. This research substantiates that alternatives to markets exist and have the potential to address their shortcomings.
Strategy Tripod Perspective on the Determinants of Airline Efficiency in A Global Context: An Application of DEA and Tobit Analysis
The airline industry is vital to contemporary civilization as a key player in the globalization process: linking regions, fostering global commerce, promoting tourism and aiding economic and social progress. However, there has been little research on the link between the operational environment and airline efficiency. Investigating the combination of institutions, organisations and strategic decisions is critical to understanding how airlines operate efficiently.
This research aims to employ the strategy tripod perspective to investigate the efficiency of a global airline sample using a non-parametric linear programming method (data envelopment analysis [DEA]). The bootstrapped DEA efficiency change scores are then regressed using a Tobit model to determine the drivers of efficiency. The strategy tripod is employed to assess the impact of institutions, industry and resources on airline efficiency. Institutions are measured by global indices of destination attractiveness; industry factors include competition, jet fuel and business model; and resources include the number of full-time employees, alliances, ownership and connectivity. The first part of the study uses panel data from 35 major airlines, collected from their annual reports for the period 2011 to 2018, and country attractiveness indices from global indicators. The second part of the research involves a qualitative data collection approach and semi-structured interviews with experts in the field to evaluate the impact of COVID-19 on the first part’s significant findings.
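The DEA stage described above can be illustrated as a linear program. Below is a minimal input-oriented CCR envelopment model solved with `scipy.optimize.linprog`. This is an illustrative sketch under simplified assumptions, not the study's implementation (which bootstraps efficiency-change scores over panel data); the function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores.
    X: (n_dmus, n_inputs) input matrix, e.g. employees, fuel cost.
    Y: (n_dmus, n_outputs) output matrix, e.g. revenue passenger-km.
    Returns an efficiency score in (0, 1] for each decision-making unit."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n)]
        # Input constraints: sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
        b_in = np.zeros(m)
        # Output constraints: sum_j lambda_j * y_rj >= y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)
```

For example, two airlines producing the same output where one uses twice the input yield scores 1.0 and 0.5, so the second could proportionally shrink its inputs by half and remain feasible against the efficient frontier.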
The main findings reveal that airlines operate at a highly competitive level regardless of their competition intensity or origin. Furthermore, the unpredictability of the environment complicates airline operations. The efficiency drivers of an airline are partially determined by its type of business model, its degree of cooperation and how fuel cost is managed. Trade openness has a negative influence on airline efficiency. COVID-19 has upended the airline industry, forcing airlines to reconsider their business model and continuously increase cooperation. Human resources, sustainability and alternative fuel sources are critical to airline survival. Finally, this study provides some evidence for the practicality of the strategy tripod and hints at the need for a broader approach in the study of international strategies.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
On the locality of indistinguishable quantum systems
This thesis investigates local realism in quantum indistinguishable particle systems, focusing on bosonic, fermionic, and 2D non-abelian anyonic systems. The local realism of quantum indistinguishable particle systems is asserted: the thesis proves that annihilation operators represent the local ontic states in these systems, closing a gap in the literature on obtaining Deutsch-Hayden descriptors for indistinguishable particle systems. The prima facie paradox of action at a distance that arises when fermionic annihilation operators are used as descriptors is resolved. The work provides examples of using and interpreting annihilation operators as local ontic states. It contains the novel construction and characterisation of annihilation operators for 2D non-abelian anyonic systems: the explicit form of Fibonacci anyon annihilation operators is provided, and their usefulness is shown by expressing the anyonic Hubbard model Hamiltonian algebraically. By studying the local realistic structure of indistinguishable particle systems, the thesis showcases the relevance of the choice of subsystem lattice and the exotic possible compositions of subsystems.
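For readers unfamiliar with treating annihilation operators as per-mode descriptors, the standard Jordan-Wigner construction gives a concrete, checkable fermionic instance. The sketch below (plain NumPy, a small mode count) builds the operators and verifies the canonical anticommutation relations; it illustrates only the well-known fermionic case, not the thesis's anyonic construction, and the helper names are hypothetical.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                  # Jordan-Wigner parity string factor
A = np.array([[0.0, 1.0], [0.0, 0.0]])    # single-mode annihilator |0><1|

def annihilators(n):
    """Jordan-Wigner fermionic annihilation operators a_0, ..., a_{n-1}
    acting on the 2^n-dimensional Fock space of n modes."""
    ops = []
    for k in range(n):
        # Z string on modes before k enforces fermionic antisymmetry.
        factors = [Z] * k + [A] + [I2] * (n - k - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

def anticomm(x, y):
    return x @ y + y @ x
```

A quick consistency check: for any modes i, j the operators satisfy {a_i, a_j} = 0 and {a_i, a_j†} = δ_ij · I, which is exactly the algebraic structure the mode-local description relies on.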
Optimising heating and cooling of smart buildings
This thesis is concerned with optimization techniques to improve the efficiency of heating and
cooling of both existing and new buildings. We focus on the thermal demand-side and we make
novel contributions to the optimality of both design and operational questions. We demonstrate
that our four novel contributions can reduce operations cost and consumption, optimize retrofit
and estimate relevant parameters of the built environment. The ultimate objective of this work is
to provide affordable and cost-effective solutions that take advantage of local existing resources.
This work addresses four gaps in the state-of-the-art. First, we contribute to current building
practice that is mostly based on human experience and simulations, which often leads to oversized
heating systems and low efficiency. The results in this thesis show the advantages of using
optimization approaches for thermal aspects in buildings. We propose models that seek optimal
decisions for one specific design day, as well as an approach that optimizes multiple day-scenarios
to more accurately represent a whole year.
Second, we study the full potential of buildings’ thermal mass and design. This has not been
fully explored due to two factors: the complexity of the mathematics involved, and the fast development
and variety of emerging technologies and approaches. We tackle the mathematical challenge by
solving non-linear non-convex models with integer decisions and by estimating the building's thermal
mass. We support rapid architectural development by studying flexible models able to adapt to
the latest building technologies such as passive house design, smart façades, and dynamic shadings.
Third, we consider flexibility provision to significantly reduce total energy costs. Flexibility
studies often only focus on flexible building loads but do not consider heating, which is often the
largest load of a building and is less flexible. Because of that, we study and model a building’s
heating demand and we propose optimization techniques to support greater flexibility of heating
loads, allowing buildings to participate more efficiently in providing demand response.
Fourth, we consider a building as an integrated system, unlike many other modelling approaches that focus on single aspects. We model a building as a complex system comprising the building’s structure, weather conditions and users’ requirements. Furthermore, we account for design decisions and for new and emerging technologies, such as heat pumps and thermal storage. Optimal decisions come from the joint analysis of all these interconnected factors.
The thesis is structured in three parts: the introduction, the main body and the conclusions. The main body consists of five chapters, each of which focuses on one research project and has the
following structure: overview, introduction, literature review, mathematical framework description,
application and results section, conclusion and future works. The first two chapters discuss the
optimization of operational aspects. The first focuses on a single thermal zone and the second on
two connected ones. The third chapter is a continuation of the first two, and presents an approach
to optimize both operations and design of buildings in a heat community. This approach integrates
commercially available energy software. The fourth chapter discusses an approach to
find the optimal refurbishment of an existing building at minimum cost. The fifth chapter shows
an inference model to represent a house within a building stock. We study the common case where the
house's data are lacking or inaccurate, and we present a model that can estimate the required
thermal parameters for modelling the house using only its heating demand.
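As an illustration of the kind of estimation the fifth chapter targets, a steady-state "energy signature" model can recover a house's overall heat-loss coefficient from heating demand and outdoor temperature alone. This is a deliberately simplified stand-in, not the thesis's inference model; the model form, setpoint, and names are assumptions.

```python
import numpy as np

def heat_loss_coefficient(q_heat, t_out, t_set=20.0):
    """Estimate an overall heat-loss coefficient H (kW/K) from heating
    demand only, via the steady-state energy-signature model
        q = H * (t_set - t_out).
    q_heat: heating demand samples (kW); t_out: outdoor temperature (degC);
    t_set: assumed indoor setpoint. Least-squares fit through the origin."""
    dt = t_set - np.asarray(t_out, dtype=float)
    q = np.asarray(q_heat, dtype=float)
    return float(dt @ q / (dt @ dt))
```

With the coefficient in hand, a design or retrofit model can predict demand at any outdoor temperature without detailed construction data, which is the practical appeal of such reduced-order estimates.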
LIPIcs, Volume 251, ITCS 2023, Complete Volume
2023-2024 Catalog
The 2023-2024 Governors State University Undergraduate and Graduate Catalog is a comprehensive listing of current information regarding: Degree Requirements, Course Offerings, and Undergraduate and Graduate Rules and Regulations.
Learning and Control of Dynamical Systems
Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insights into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise.
In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems.
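The simplest instance of the problem setting above is learning a fully observable linear system x_{t+1} = A x_t + w_t from a single trajectory by ordinary least squares; the finite-sample question is then how the estimation error shrinks with the trajectory length. The sketch below is an illustrative toy, not the thesis's algorithms (such as FALCON), and all names are hypothetical.

```python
import numpy as np

def simulate(A, T, noise=0.05, seed=1):
    """Roll out x_{t+1} = A x_t + w_t with Gaussian process noise.
    Returns the trajectory as an array of shape (T+1, n)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = np.zeros((T + 1, n))
    x[0] = rng.standard_normal(n)
    for t in range(T):
        x[t + 1] = A @ x[t] + noise * rng.standard_normal(n)
    return x

def estimate_dynamics(X):
    """OLS estimate of A from consecutive state pairs:
    A_hat = argmin_A sum_t ||x_{t+1} - A x_t||^2."""
    X0, X1 = X[:-1], X[1:]
    sol, *_ = np.linalg.lstsq(X0, X1, rcond=None)
    return sol.T
```

For a stable 2x2 system and a trajectory of length 1000, the entrywise error of `A_hat` is already small; finite-sample theory of the kind studied in the thesis makes such error rates precise as a function of the noise level, trajectory length, and excitation of the system.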
We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
Exact Community Recovery in the Geometric SBM
We study the problem of exact community recovery in the Geometric Stochastic
Block Model (GSBM), where each vertex has an unknown community label as well as
a known position, generated according to a Poisson point process. Edges are
formed independently conditioned on the community
labels and positions, where vertices may only be connected by an edge if they
are within a prescribed distance of each other. The GSBM thus favors the
formation of dense local subgraphs, which commonly occur in real-world
networks, a property that makes the GSBM qualitatively very different from the
standard Stochastic Block Model (SBM). We propose a linear-time algorithm for
exact community recovery, which succeeds down to the information-theoretic
threshold, confirming a conjecture of Abbe, Baccelli, and Sankararaman. The
algorithm involves two phases. The first phase exploits the density of local
subgraphs to propagate estimated community labels among sufficiently occupied
subregions, and produces an almost-exact vertex labeling. The second phase then
refines the initial labels using a Poisson testing procedure. Thus, the GSBM
enjoys local to global amplification just as the SBM, with the advantage of
admitting an information-theoretically optimal, linear-time algorithm.
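The Poisson testing step in the second phase can be illustrated in a toy two-community setting: a vertex's label is re-estimated by a likelihood-ratio test on its edge counts into each estimated community, where a vertex sends Poisson(lam_in) edges into its own community and Poisson(lam_out) into the other. The rates and the reduction below are illustrative assumptions, not the paper's exact procedure.

```python
import math

def refine_label(d1, d2, lam_in, lam_out):
    """Relabel a vertex by comparing Poisson log-likelihoods.
    d1, d2: observed edge counts into estimated communities 1 and 2.
    Under label 1, d1 ~ Poisson(lam_in) and d2 ~ Poisson(lam_out);
    under label 2 the rates swap. The constant term -(lam_in + lam_out)
    is identical under both hypotheses, so it cancels."""
    ll1 = d1 * math.log(lam_in) + d2 * math.log(lam_out)
    ll2 = d1 * math.log(lam_out) + d2 * math.log(lam_in)
    return 1 if ll1 >= ll2 else 2
```

When lam_in > lam_out the test reduces to a majority vote on (d1 - d2); because the first-phase labeling is already almost exact, such a per-vertex test suffices to correct the remaining errors, which is the local-to-global amplification the abstract refers to.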