To Build a Hero: Douglas MacArthur and the War That Wasn’t
This thesis argues that Douglas MacArthur, General of the Army and Commander in Chief of Allied Forces in the Southwest Pacific Area during the Second World War, and those acting under his purview, knowingly and deliberately engaged in a campaign of misinformation – during and after the war – with the intention of enhancing his reputation. The goal of this campaign was twofold: to secure enough popular support to make him politically unassailable at the time, and to protect his legacy for posterity. Unlike previous surveys, which fail to hold MacArthur accountable for the deep and pervasive vein of propagandistic fallacy that he and his subordinates inserted into the historical narrative, this study puts the lie to the defense that his actions were the innocent idiosyncrasies of a colorful eccentric, the aloofness of an old man, or the fault of loyal but unprompted subordinates.
Thorough examination of contemporary records and accounts is used to establish – beyond a reasonable doubt – that MacArthur understood both the reality of the situations in question and what he stood to gain by reporting otherwise. An analysis of the historiography concerning MacArthur was conducted, and is herein summarized, to establish that his efforts were effective and pervasive, and distinct in both quantity and scope from the self-aggrandizement undertaken by his peers. As there exists far too much literature, both primary and secondary, for comprehensive analysis in a work of this type, this study focuses primarily on two periods between December 1941 and May 1942 – the Clark Field Attack and the Evacuation from Corregidor – to establish a pattern of behavior demonstrative of conscious action, malicious and selfish intent, and tangible benefit. This work aims to serve as a realization – one nearly a century in the making – of the yearning by historians, servicemen, officials, victims, and voyeurs for a time and a method to declare openly that one of America's most venerated heroes was a fraud.
This work is composed in the hope that the glory and acclaim he stole might be returned to those whose blood bought the veneration with which he showered himself. It is written in the hope that historians might free themselves from the fear of repercussions implicit in holding a man Franklin Delano Roosevelt once called "the most dangerous man in America" accountable for his lies. And it is published in the hope that the vainglorious denizens of the future may yet come to see Douglas MacArthur as a cautionary tale rather than a figure for emulation.
Generic Programming with Extensible Data Types; Or, Making Ad Hoc Extensible Data Types Less Ad Hoc
We present a novel approach to generic programming over extensible data
types. Row types capture the structure of records and variants, and can be used
to express record and variant subtyping, record extension, and modular
composition of case branches. We extend row typing to capture generic
programming over rows themselves, capturing patterns including lifting
operations to records and variants from their component types, and the
duality between case blocks over variants and records of labeled functions,
without placing specific requirements on the fields or constructors present in
the records and variants. We formalize our approach in System Rω, an
extension of Fω with row types, and give a denotational semantics for
(stratified) Rω in Agda.
Comment: To appear at: International Conference on Functional Programming 2023. Corrected citations from previous version.
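The duality the abstract mentions can be illustrated in plain Haskell, specialised to two labels (the names Variant2, Handlers2, and caseOf are ours for illustration; System Rω generalises this to arbitrary rows of labels):

```haskell
import Data.Char (toUpper)

-- A variant with two labelled constructors ...
data Variant2 a b = L a | R b

-- ... is eliminated by a record of labelled functions, one per constructor.
data Handlers2 a b r = Handlers2 { onL :: a -> r, onR :: b -> r }

-- A "case block" over the variant is exactly application of such a record.
caseOf :: Handlers2 a b r -> Variant2 a b -> r
caseOf h (L a) = onL h a
caseOf h (R b) = onR h b

main :: IO ()
main = do
  let h = Handlers2 { onL = show . (+ (1 :: Int)), onR = map toUpper }
  putStrLn (caseOf h (L 41))    -- "42"
  putStrLn (caseOf h (R "ok"))  -- "OK"
```

Generic programming over rows, as in the paper, would let caseOf be written once for any set of labels rather than per arity.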
Constrained Type Families
We present an approach to support partiality in type-level computation
without compromising expressiveness or type safety. Existing frameworks for
type-level computation either require totality or implicitly assume it. For
example, type families in Haskell provide a powerful, modular means of defining
type-level computation. However, their current design implicitly assumes that
type families are total, introducing nonsensical types and significantly
complicating the metatheory of type families and their extensions. We propose
an alternative design, using qualified types to pair type-level computations
with predicates that capture their domains. Our approach naturally captures the
intuitive partiality of type families, simplifying their metatheory. As
evidence, we present the first complete proof of consistency for a language
with closed type families.
Comment: Originally presented at ICFP 2017; extended edition.
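The partiality problem can be sketched in today's GHC Haskell (not the paper's formal calculus). The closed family Head below has no equation for the empty list, yet GHC still forms the stuck type Head '[]; pairing the computation with a domain predicate, in the spirit of the paper's qualified types, makes the obligation explicit. The names InDomain and safeHeadTy are ours, not the paper's:

```haskell
{-# LANGUAGE TypeFamilies, DataKinds, TypeOperators, PolyKinds,
             ConstraintKinds, FlexibleContexts #-}
import Data.Kind (Constraint, Type)

data Proxy (a :: k) = Proxy

-- A partial closed type family: no equation for '[].
type family Head (xs :: [Type]) :: Type where
  Head (x ': _) = x

-- A predicate capturing Head's domain, in the spirit of qualified types.
type family InDomain (xs :: [Type]) :: Constraint where
  InDomain (x ': _) = ()   -- defined case: trivially satisfied

-- Callers must supply evidence that xs lies in Head's domain.
safeHeadTy :: InDomain xs => Proxy xs -> Proxy (Head xs)
safeHeadTy _ = Proxy

main :: IO ()
main = do
  let _ = safeHeadTy (Proxy :: Proxy '[Int, Bool])  -- accepted: domain holds
  -- safeHeadTy (Proxy :: Proxy '[])  -- rejected: InDomain '[] is insoluble
  putStrLn "in-domain use typechecks"
```

Without the constraint, nothing stops a program from mentioning the nonsensical type Head '[]; with it, the ill-defined use is rejected at the call site.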
Towards Races in Linear Logic
Process calculi based on logic, such as DILL and CP, provide a
foundation for deadlock-free concurrent programming, but exclude
non-determinism and races. HCP is a reformulation of CP which addresses a
fundamental shortcoming: the operator for parallel composition from
the π-calculus does not correspond to any rule of linear logic, and
therefore not to any term construct in CP.
We introduce non-deterministic HCP, which extends HCP with a novel account of
non-determinism. Our approach draws on bounded linear logic to provide a
strongly-typed account of standard process calculus expressions of
non-determinism. We show that our extension is expressive enough to capture
many uses of non-determinism in untyped calculi, such as non-deterministic
choice, while preserving HCP's meta-theoretic properties, including deadlock
freedom.
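The kind of race that CP excludes and non-deterministic HCP recovers can be sketched in plain concurrent Haskell: two processes race to deliver a value on a shared channel, and either may win. This is only an operational analogy, not the paper's typed calculus:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  chan <- newEmptyMVar
  _ <- forkIO (putMVar chan "left")   -- process P sends on the channel
  _ <- forkIO (putMVar chan "right")  -- process Q races P on the same channel
  winner <- takeMVar chan             -- non-deterministic choice: P or Q wins
  putStrLn winner                     -- prints "left" or "right", by schedule
```

Non-deterministic HCP gives such choices a linear-logic typing, so that admitting the race does not forfeit deadlock freedom.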
PoseBusters: AI-based docking methods fail to generate physically valid poses or generalise to novel sequences
The last few years have seen the development of numerous deep learning-based
protein-ligand docking methods. They offer huge promise in terms of speed and
accuracy. However, despite claims of state-of-the-art performance in terms of
crystallographic root-mean-square deviation (RMSD), upon closer inspection, it
has become apparent that they often produce physically implausible molecular
structures. It is therefore not sufficient to evaluate these methods solely by
RMSD to a native binding mode. It is vital, particularly for deep
learning-based methods, that they are also evaluated on steric and energetic
criteria. We present PoseBusters, a Python package that performs a series of
standard quality checks using the well-established cheminformatics toolkit
RDKit. The PoseBusters test suite validates the chemical and geometric
consistency of a ligand, including its stereochemistry, and the physical
plausibility of intra- and intermolecular measurements such as the planarity
of aromatic rings, standard bond lengths, and protein-ligand clashes.
Only methods that both pass these checks and predict native-like binding
modes should be classed as having "state-of-the-art" performance. We use
PoseBusters to compare five deep learning-based docking methods (DeepDock,
DiffDock, EquiBind, TankBind, and Uni-Mol) and two well-established standard
docking methods (AutoDock Vina and CCDC Gold) with and without an additional
post-prediction energy minimisation step using a molecular mechanics force
field. We show that both in terms of physical plausibility and the ability to
generalise to examples that are distinct from the training data, no deep
learning-based method yet outperforms classical docking tools. In addition, we
find that molecular mechanics force fields contain docking-relevant physics
missing from deep-learning methods. PoseBusters allows practitioners to assess
docking and molecular generation methods and may inspire new inductive biases
still required to improve deep learning-based methods, which will help drive
the development of more accurate and more realistic predictions.
Comment: 10 pages, 6 figures; version 2 added an additional filter to the PoseBusters Benchmark set to remove ligands with crystal contacts; version 3 corrected the description of the binding site used for Uni-Mol.
A simple semantics for Haskell overloading
As originally proposed, type classes provide overloading and ad-hoc
definition, but can still be understood (and implemented) in terms of strictly
parametric calculi. This is not true of subsequent extensions of type classes.
Functional dependencies and equality constraints allow the satisfiability of
predicates to refine typing; this means that the interpretations of equivalent
qualified types may not be interconvertible. Overlapping instances and instance
chains allow predicates to be satisfied without determining the implementations
of their associated class methods, introducing truly non-parametric behavior.
We propose a new approach to the semantics of type classes, interpreting
polymorphic expressions by the behavior of each of their ground instances, but
without requiring that those behaviors be parametrically determined. We argue
that this approach both matches the intuitive meanings of qualified types and
accurately models the behavior of programs.
Comment: Originally presented at Haskell 201
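The non-parametric behavior the abstract refers to can be seen directly with GHC's overlapping instances: two ground instances of the same qualified type behave in ways no single parametric definition determines. The class name Describe is ours, for illustration:

```haskell
{-# LANGUAGE FlexibleInstances #-}

class Describe a where
  describe :: a -> String

-- Generic fallback, usable at every type ...
instance {-# OVERLAPPABLE #-} Describe a where
  describe _ = "something"

-- ... except Int, which gets behaviour of its own. No parametric
-- definition interprets both ground instances uniformly.
instance {-# OVERLAPPING #-} Describe Int where
  describe n = "the int " ++ show n

main :: IO ()
main = do
  putStrLn (describe (3 :: Int))  -- "the int 3": the specific instance wins
  putStrLn (describe True)        -- "something": the fallback applies
```

Interpreting the polymorphic describe by the behavior of each of its ground instances, as the paper proposes, accommodates exactly this kind of definition.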
Type Classes and Instance Chains: A Relational Approach
Type classes, first proposed during the design of the Haskell programming language, extend standard type systems to support overloaded functions. Since their introduction, type classes have been used to address a range of problems, from typing ordering and arithmetic operators to describing heterogeneous lists and limited subtyping. However, while type class programming is useful for a variety of practical problems, its wider use is limited by the inexpressiveness and hidden complexity of current mechanisms. We propose two improvements to existing class systems. First, we introduce several novel language features, instance chains and explicit failure, that increase the expressiveness of type classes while providing more direct expression of current idioms. To validate these features, we have built an implementation of these features, demonstrating their use in a practical setting and their integration with type reconstruction for a Hindley-Milner type system. Second, we define a set-based semantics for type classes that provides a sound basis for reasoning about type class systems, their implementations, and the meanings of programs that use them.
Partial type constructors: Or, making ad hoc datatypes less ad hoc
This work is licensed under a Creative Commons Attribution 4.0 International License.
Functional programming languages assume that type constructors are total. Yet functional programmers know better: counterexamples range from container types that make limiting assumptions about their contents (e.g., requiring computable equality or ordering functions) to type families with defining equations only over certain choices of arguments. We present a language design and formal theory of partial type constructors, capturing the domains of type constructors using qualified types. Our design is both simple and expressive: we support partial datatypes as first-class citizens (including as instances of parametric abstractions, such as the Haskell Functor and Monad classes), and show a simple type elaboration algorithm that avoids placing undue annotation burden on programmers. We show that our type system rejects ill-defined types and can be compiled to a semantic model based on System F. Finally, we have conducted an experimental analysis of a body of Haskell code, using a proof-of-concept implementation of our system; while there are cases where our system requires additional annotations, these cases are rarely encountered in practical Haskell code.
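The abstract's motivating example, a container that makes a limiting assumption about its contents, looks like this in today's Haskell, where the Ord constraint must be repeated at every operation rather than being part of Set's domain as the paper proposes (this Set is a minimal sketch of ours, not the paper's code):

```haskell
import qualified Data.List as List

newtype Set a = Set [a]   -- invariant: sorted, duplicate-free

empty :: Set a
empty = Set []

-- Every operation that touches the invariant must repeat the Ord constraint.
insert :: Ord a => a -> Set a -> Set a
insert x (Set xs) = Set (List.insert x (List.delete x xs))

toList :: Set a -> [a]
toList (Set xs) = xs

-- Set cannot be a law-abiding Functor: fmap with plain map typechecks but
-- breaks the sorted invariant, and an invariant-restoring fmap would need
-- an Ord b constraint that Functor's signature cannot express.

main :: IO ()
main = print (toList (insert 2 (insert 3 (insert 2 empty))))  -- [2,3]
```

With partial type constructors, Ord a would be part of the domain of Set itself, so Set could participate in abstractions like Functor without repeating the constraint.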