547 research outputs found
Gabriel Harvey and the History of Reading: Essays by Lisa Jardine and others
Few articles in the humanities have had the impact of Lisa Jardine and Anthony Grafton’s seminal ‘Studied for Action’ (1990), a study of the reading practices of Elizabethan polymath and prolific annotator Gabriel Harvey. Their excavation of the setting, methods and ambitions of Harvey’s encounters with his books ignited the History of Reading, an interdisciplinary field which quickly became one of the most exciting corners of the scholarly cosmos. A generation inspired by the model of Harvey fanned out across the world’s libraries and archives, seeking to reveal the many creative, unexpected and curious ways that individuals throughout history responded to texts, and how these interpretations in turn illuminate past worlds.
Three decades on, Harvey’s example and Jardine’s work remain central to cutting-edge scholarship in the History of Reading. By uniting ‘Studied for Action’ with published and unpublished studies on Harvey by Jardine, Grafton and the scholars they have influenced, this collection provides a unique lens on the place of marginalia in textual, intellectual and cultural history. The chapters capture subsequent work on Harvey and map the fields opened by Jardine and Grafton’s original article, collectively offering a posthumous tribute to Lisa Jardine and an authoritative overview of the History of Reading.
Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature.

For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers.

We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
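The commutative matrix-scaling problem mentioned above has a classical workhorse, Sinkhorn iteration, which alternately normalizes rows and columns; a minimal numpy sketch for intuition (the textbook method, not the thesis’s IPM or quantum algorithms):

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix;
    the iterates converge to a doubly stochastic matrix."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)  # make every row sum to 1
        A /= A.sum(axis=0, keepdims=True)  # make every column sum to 1
    return A

S = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
# After convergence, all row and column sums are (approximately) 1.
```

Each normalization step is cheap, but high-precision guarantees require many iterations, which is part of what motivates the interior-point and quantum approaches studied in the thesis.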
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Effects of municipal smoke-free ordinances on secondhand smoke exposure in the Republic of Korea
Objective: To reduce premature deaths due to secondhand smoke (SHS) exposure among non-smokers, the Republic of Korea (ROK) adopted changes to the National Health Promotion Act, which allowed local governments to enact municipal ordinances to strengthen their authority to designate smoke-free areas and levy penalty fines. In this study, we examined national trends in SHS exposure after the introduction of these municipal ordinances at the city level in 2010.

Methods: We used interrupted time series analysis to assess whether the trends of SHS exposure in the workplace and at home, and the primary cigarette smoking rate, changed following the policy adjustment in the national legislation in the ROK. Population-standardized data for selected variables were retrieved from a nationally representative survey dataset and used to study the policy action’s effectiveness.

Results: Following the change in the legislation, SHS exposure in the workplace reversed course from an increasing trend (18% per year) before the introduction of these smoke-free ordinances to a decreasing trend (−10% per year) after their adoption and enforcement (β2 = 0.18, p-value = 0.07; β3 = −0.10, p-value = 0.02). SHS exposure at home (β2 = 0.10, p-value = 0.09; β3 = −0.03, p-value = 0.14) and the primary cigarette smoking rate (β2 = 0.03, p-value = 0.10; β3 = 0.008, p-value = 0.15) showed no significant changes over the sampled period. Analyses stratified by sex showed that the municipal ordinances reduced workplace SHS exposure for both males and females, but had little effect on the primary cigarette smoking rate, especially among females.

Conclusion: Strengthening the role of local governments by giving them the authority to enact and enforce penalties for SHS exposure violations helped the ROK reduce SHS exposure in the workplace. However, smoking behaviors and related activities seemed to shift to less restrictive areas, such as streets and apartment hallways, negating some of the effects of these ordinances. Future studies should investigate how smoke-free policies beyond public places can further reduce SHS exposure in the ROK.
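The interrupted time series design used here can be sketched as a segmented regression with a level-change term (analogous to β2) and a slope-change term (analogous to β3); a toy Python illustration on synthetic, hypothetical data, not the study’s dataset:

```python
import numpy as np

# Hypothetical yearly exposure series; an ordinance takes effect at t = 5.
t = np.arange(10, dtype=float)
post = (t >= 5).astype(float)            # level-change indicator (beta2 term)
t_post = np.where(t >= 5, t - 5.0, 0.0)  # slope-change term (beta3 term)

# Synthetic, noise-free outcome: rising pre-trend, then a drop and reversal.
y = 0.30 + 0.02 * t - 0.05 * post - 0.03 * t_post

# Segmented regression: intercept, pre-trend, level change, slope change.
X = np.column_stack([np.ones_like(t), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# On noise-free data, beta recovers [0.30, 0.02, -0.05, -0.03] exactly.
```

A significant negative slope-change coefficient is what the study reports for workplace SHS exposure after the ordinances took effect.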
Agnostic proper learning of monotone functions: beyond the black-box correction barrier
We give the first agnostic, efficient, proper learning algorithm for monotone Boolean functions. Given uniformly random examples of an unknown function f, our algorithm outputs a hypothesis g that is monotone and (opt + ε)-close to f, where opt is the distance from f to the closest monotone function. The running time of the algorithm (and consequently the size and evaluation time of the hypothesis) is also 2^Õ(√n/ε), nearly matching the lower bound of Blais et al. (RANDOM '15). We also give an algorithm for estimating, up to additive error ε, the distance of an unknown function f to monotone, with a run-time of 2^Õ(√n/ε). Previously, sample-efficient algorithms were known for both of these problems, but those algorithms were not run-time efficient. Our work thus closes this gap between run-time and sample complexity.

This work builds upon the improper learning algorithm of Bshouty and Tamon (JACM '96) and the proper semiagnostic learning algorithm of Lange, Rubinfeld, and Vasilyan (FOCS '22), which obtains a non-monotone Boolean-valued hypothesis and then "corrects" it to monotone using query-efficient local computation algorithms on graphs. This black-box correction approach can achieve no error better than 2·opt + ε information-theoretically; we bypass this barrier by (a) augmenting the improper learner with a convex optimization step, and (b) learning and correcting a real-valued function before rounding its values to Boolean. Our real-valued correction algorithm solves the "poset sorting" problem of [LRV22] for functions over general posets with non-Boolean labels.
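For intuition on the central quantity opt, the distance to the closest monotone function, it can be computed by brute force for very small n; a hypothetical Python sketch for n = 2, unrelated to the paper’s efficient algorithms:

```python
from itertools import product

n = 2
points = list(product([0, 1], repeat=n))  # the Boolean cube {0,1}^n

def is_monotone(g):
    """Check that x <= y coordinate-wise implies g[x] <= g[y]."""
    return all(g[x] <= g[y]
               for x in points for y in points
               if all(a <= b for a, b in zip(x, y)))

def dist_to_monotone(f):
    """Normalized Hamming distance from f to the closest monotone function,
    by enumerating all 2^(2^n) Boolean functions (feasible only for tiny n)."""
    return min(sum(f[p] != g[p] for p in points)
               for g in (dict(zip(points, vals))
                         for vals in product([0, 1], repeat=len(points)))
               if is_monotone(g)) / len(points)

AND = {p: p[0] & p[1] for p in points}  # monotone, so its distance is 0
NOT1 = {p: 1 - p[0] for p in points}    # anti-monotone in the first coordinate
```

The enumeration is doubly exponential in n, which is precisely why sample- and run-time-efficient algorithms for these problems are nontrivial.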
Learning and Control of Dynamical Systems
Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, much of these classical results have focused on asymptotic guarantees, providing limited insights into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise.
In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems.
We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
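As a toy illustration of the learning step in such pipelines, an unknown linear system x_{t+1} = A x_t + B u_t can be identified from a single trajectory by ordinary least squares; a minimal sketch with made-up matrices, not FALCON itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])  # unknown stable dynamics (made up)
B_true = np.array([[0.0], [1.0]])

# Roll out one noise-free trajectory driven by random (exciting) inputs.
T = 200
x = np.zeros((T + 1, 2))
u = rng.standard_normal((T, 1))
for t in range(T):
    x[t + 1] = A_true @ x[t] + B_true @ u[t]

# Regress x_{t+1} on [x_t, u_t] to recover [A B] by least squares.
Z = np.hstack([x[:-1], u])               # regressors, shape (T, 3)
theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T  # estimated system matrices
```

With process noise added, how fast the estimation error shrinks with T is exactly the kind of finite-sample question the thesis studies.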
Uncertainty Quantification in Machine Learning for Engineering Design and Health Prognostics: A Tutorial
On top of machine learning models, uncertainty quantification (UQ) functions
as an essential layer of safety assurance that could lead to more principled
decision making by enabling sound risk assessment and management. The safety
and reliability improvement of ML models empowered by UQ has the potential to
significantly facilitate the broad adoption of ML solutions in high-stakes
decision settings, such as healthcare, manufacturing, and aviation, to name a
few. In this tutorial, we aim to provide a holistic lens on emerging UQ methods
for ML models with a particular focus on neural networks and the applications
of these UQ methods in tackling engineering design as well as prognostics and
health management problems. Toward this goal, we start with a comprehensive
classification of uncertainty types, sources, and causes pertaining to UQ of ML
models. Next, we provide a tutorial-style description of several
state-of-the-art UQ methods: Gaussian process regression, Bayesian neural
network, neural network ensemble, and deterministic UQ methods focusing on
spectral-normalized neural Gaussian process. Established upon the mathematical
formulations, we subsequently examine the soundness of these UQ methods
quantitatively and qualitatively (by a toy regression example) to examine their
strengths and shortcomings from different dimensions. Then, we review
quantitative metrics commonly used to assess the quality of predictive
uncertainty in classification and regression problems. Afterward, we discuss
the increasingly important role of UQ of ML models in solving challenging
problems in engineering design and health prognostics. Two case studies with
source codes available on GitHub are used to demonstrate these UQ methods and
compare their performance in the life prediction of lithium-ion batteries at
the early stage and the remaining useful life prediction of turbofan engines.
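Of the surveyed methods, Gaussian process regression admits the most compact illustration; a numpy-only sketch on hypothetical data (not the tutorial’s case studies) showing how predictive variance encodes uncertainty:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential (RBF) kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ls**2)

# Hypothetical training data: near-noiseless observations of sin(x).
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
noise = 1e-4

# Exact GP posterior mean and variance on a test grid.
Xs = np.linspace(-3, 3, 61)
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
# var is tiny at the training inputs and grows with distance from the data.
```

This distance-sensitive variance is the behavior the tutorial contrasts with ensembles and spectral-normalized neural Gaussian processes.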
BFBrain: Scalar Bounded-From-Below Conditions from Bayesian Active Learning
We present a procedure leveraging Bayesian deep active learning to rapidly
produce highly accurate approximate bounded-from-below conditions for arbitrary
renormalizable scalar potentials, in the form of a neural network which may be
saved and exported for use in arbitrary parameter space scans. We explore the
performance of our procedure on three different scalar potentials with either
highly nontrivial or unknown symbolic bounded-from-below conditions (the
two-Higgs doublet model, the three-Higgs doublet model, and a version of the
Georgi-Machacek model without custodial symmetry). We find that we can produce
fast and highly accurate binary classifiers for all three potentials.
Furthermore, for the potentials for which no known symbolic necessary and
sufficient conditions on boundedness-from-below exist, our classifiers
substantially outperform some common approximate analytical methods, such as
producing tractable sufficient but not necessary conditions or evaluating
boundedness-from-below conditions for scenarios in which only a subset of the
theory's fields achieve vev's. Our methodology can be readily adapted to any
renormalizable scalar field theory. For the community's use, we have developed
a Python package, BFBrain, which allows for the rapid implementation of our
analysis procedure on user-specified scalar potentials with a high degree of
customizability.
Comment: 33 pages and 13 figures, plus appendices. BFBrain package available at https://github.com/Gwojci03/BFBrai
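The underlying decision problem, whether a scalar potential’s quartic part is non-negative along every field direction, can be probed numerically by sampling directions; a minimal two-field toy sketch (a plain numerical check, not BFBrain’s Bayesian active-learning classifier):

```python
import numpy as np

def quartic(p1, p2, lam1, lam2, lam12):
    """Quartic part of a toy two-field potential (hypothetical couplings)."""
    return lam1 * p1**4 + lam2 * p2**4 + lam12 * p1**2 * p2**2

def looks_bounded_below(lam1, lam2, lam12, samples=10_000, seed=0):
    """Sampled necessary check: the quartic must be non-negative along every
    field direction; sampling can falsify boundedness, not prove it."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi, samples)
    v = quartic(np.cos(theta), np.sin(theta), lam1, lam2, lam12)
    return bool(np.all(v >= 0.0))

# For this toy potential the exact condition is lam1, lam2 > 0 and
# lam12 > -2 * sqrt(lam1 * lam2), so lam12 = -3 with lam1 = lam2 = 1 fails.
```

For realistic multi-field potentials no such closed-form condition may exist, which is the gap the paper’s learned classifiers target.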
Next Generation Business Ecosystems: Engineering Decentralized Markets, Self-Sovereign Identities and Tokenization
Digital transformation research increasingly shifts from studying information systems within organizations towards adopting an ecosystem perspective, where multiple actors co-create value. While digital platforms have become a ubiquitous phenomenon in consumer-facing industries, organizations remain cautious about fully embracing the ecosystem concept and sharing data with external partners. Concerns about the market power of platform orchestrators and ongoing discussions on privacy, individual empowerment, and digital sovereignty further complicate the widespread adoption of business ecosystems, particularly in the European Union.
In this context, technological innovations in Web3, including blockchain and other distributed ledger technologies, have emerged as potential catalysts for disrupting centralized gatekeepers and enabling a strategic shift towards user-centric, privacy-oriented next-generation business ecosystems. However, existing research efforts, focused on decentralizing interactions through distributed network topologies and open protocols, lack theoretical convergence, resulting in a fragmented and complex landscape that inadequately addresses the challenges organizations face when transitioning to an ecosystem strategy that harnesses the potential of disintermediation.
To address these gaps and successfully engineer next-generation business ecosystems, a comprehensive approach is needed that encompasses the technical design, economic models, and socio-technical dynamics. This dissertation aims to contribute to this endeavor by exploring the implications of Web3 technologies on digital innovation and transformation paths. Drawing on a combination of qualitative and quantitative research, it makes three overarching contributions:
First, a conceptual perspective on 'tokenization' in markets clarifies its ambiguity and provides a unified understanding of its role in ecosystems.
This perspective includes frameworks on: (a) technological; (b) economic; and (c) governance aspects of tokenization.
Second, a design perspective on 'decentralized marketplaces' highlights the need for an integrated understanding of micro-structures, business structures, and IT infrastructures in blockchain-enabled marketplaces. This perspective includes: (a) an explorative literature review on design factors; (b) case studies and insights from practitioners to develop requirements and design principles; and (c) a design science project with an interface design prototype of blockchain-enabled marketplaces.
Third, an economic perspective on 'self-sovereign identities' (SSI) examines them as micro-structural elements of decentralized markets. This perspective includes: (a) value creation mechanisms and business aspects of strategic alliances governing SSI ecosystems; (b) business model characteristics adopted by organizations leveraging SSI; and (c) business model archetypes and a framework for SSI ecosystem engineering efforts.
The dissertation concludes by discussing limitations as well as outlining potential avenues for future research. These include, amongst others, exploring the challenges of ecosystem bootstrapping in the absence of intermediaries, examining the make-or-join decision in ecosystem emergence, addressing the multidimensional complexity of Web3-enabled ecosystems, investigating incentive mechanisms for inter-organizational collaboration, understanding the role of trust in decentralized environments, and exploring varying degrees of decentralization with potential transition pathways.
The Politics of Asceticism: An Analysis of the Political Spirituality of the Imperial Stoics
In recent decades, a renewed focus on the Hellenistic and Roman philosophies has rehabilitated the practical aspect of ancient philosophy. This aspect of philosophy has continued to be part of the philosophical discipline, but it has often been surpassed in importance and appreciation by philosophy’s theoretical discourse. With the increased focus on ancient philosophy’s practical outlook, an interesting question emerges: what does this practical outlook entail for how we interpret and analyse the political philosophy and political praxis of the ancient philosophers? To examine this question, this dissertation sheds light on Imperial Stoicism and examines these philosophers’ political philosophy in view of the concept of ‘political spirituality.’ The often-reiterated interpretation of Imperial Stoicism is that these philosophers were either entirely apolitical or that, unlike their Hellenistic predecessors, they were markedly conservative, reactionary, and generally supported the status quo of society despite an apparent subversive veneer. Both of these interpretations are significantly questioned in this dissertation.