Loss minimization yields multicalibration for large neural networks
Multicalibration is a notion of fairness that aims to provide accurate
predictions across a large set of groups. Multicalibration is known to be a
different goal than loss minimization, even for simple predictors such as
linear functions. In this note, we show that for (almost all) large neural
network sizes, optimally minimizing squared error leads to multicalibration.
Our results are about representational aspects of neural networks, and not
about algorithmic or sample complexity considerations. Previous such results
were known only for predictors that were nearly Bayes-optimal and were
therefore representation independent. We emphasize that our results do not
apply to specific algorithms for optimizing neural networks, such as SGD, and
they should not be interpreted as "fairness comes for free from optimizing neural networks".
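Multicalibration requires that predictions be calibrated not just on average but within each group of a rich collection. The sketch below is a hypothetical illustration (not from the note above): it measures a simple binned calibration error separately per group on synthetic data, where a predictor that outputs the true outcome probabilities should score near zero in every group.

```python
import numpy as np

def groupwise_calibration_error(preds, labels, groups, n_bins=10):
    """Binned calibration error (an ECE variant), computed within each group."""
    errors = {}
    for g in np.unique(groups):
        p, y = preds[groups == g], labels[groups == g]
        bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
        err = 0.0
        for b in range(n_bins):
            in_bin = bins == b
            if in_bin.any():
                # bin weight * |mean prediction - empirical frequency|
                err += in_bin.mean() * abs(p[in_bin].mean() - y[in_bin].mean())
        errors[g] = err
    return errors

rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=5000)            # two hypothetical groups
true_p = rng.uniform(size=5000)                   # underlying outcome probabilities
labels = (rng.uniform(size=5000) < true_p).astype(float)

# A predictor that outputs the true probabilities is calibrated in every group,
# so its per-group calibration error should be near zero.
errs = groupwise_calibration_error(true_p, labels, groups)
```

A miscalibrated predictor (e.g. one that overpredicts for one group only) would show a large error for that group while remaining well calibrated overall.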
An exploration of the language within Ofsted reports and their influence on primary school performance in mathematics: a mixed methods critical discourse analysis
This thesis contributes to the understanding of the language of Ofsted reports, their similarity to one another, and associations between different terms used within ‘areas for improvement’ sections and subsequent outcomes for pupils. The research responds to concerns from serving headteachers that Ofsted reports are overly similar, do not capture the unique story of their school, and are unhelpful for improvement. In seeking to answer ‘how similar are Ofsted reports?’, the study uses two tools, plagiarism detection software (Turnitin) and a discourse analysis tool (NVivo), to identify trends within and across a large corpus of reports.
The approach is based on critical discourse analysis (Van Dijk, 2009; Fairclough, 1989) but shaped in the form of practitioner enquiry seeking power in the form of impact on pupils and practitioners, rather than a more traditional, sociological application of the method.
The research found that in 2017, primary school section 5 Ofsted reports had more than half of their content exactly duplicated within other primary school inspection reports published that same year. Discourse analysis showed that the quality assurance process overrode variables such as inspector designation, gender, or team size, leading to three distinct patterns of duplication: block duplication, self-referencing, and template writing. The most unique part of a report was found to be the ‘area for improvement’ section, which was tracked to externally verified outcomes for pupils using terms linked to ‘mathematics’. Schools required to improve mathematics in their areas for improvement improved progress and attainment in mathematics significantly more than national rates. These findings indicate that there was a positive correlation between the inspection reporting process and a beneficial impact on pupil outcomes in mathematics, and that the significant similarity of one report to another had no bearing on the usefulness of the report for school improvement purposes within this corpus.
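The kind of cross-report duplication that Turnitin surfaces can be approximated with standard text-similarity measures. As a hedged illustration (not the thesis's actual method, and using invented report sentences), word n-gram overlap with Jaccard similarity flags heavily templated text:

```python
def ngram_set(text, n=5):
    """Set of word n-grams ("shingles") from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=5):
    """Share of distinct word n-grams that two documents have in common."""
    sa, sb = ngram_set(a, n), ngram_set(b, n)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

# Two invented report sentences differing in only two words.
report_a = ("leaders have ensured that pupils make good progress in mathematics "
            "across the school and attainment is above the national average")
report_b = ("leaders have ensured that pupils make good progress in mathematics "
            "across the school although attainment is below the national average")

sim = jaccard_similarity(report_a, report_b)   # high overlap, but well below 1.0
```

Applied pairwise across a corpus of reports, scores like this would reveal the block duplication and template writing patterns described above.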
The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be reliably delivered. Next, the viability of using "bricking" as a method of ransom was evaluated, whereby devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has not only been shown to be viable but also highly damaging to both IoT devices and their users. While the use of IoT-based ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attacking techniques discovered in this PhD research.
Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio
This paper reviews the most common situations where one or more regularity
conditions which underlie classical likelihood-based parametric inference fail.
We identify three main classes of problems: boundary problems, indeterminate
parameter problems -- which include non-identifiable parameters and singular
information matrices -- and change-point problems. The review focuses on the
large-sample properties of the likelihood ratio statistic. We emphasize
analytical solutions and acknowledge software implementations where available.
We furthermore provide summary insight into the tools available to derive the key results. Other approaches to hypothesis testing and connections to estimation are listed in the annotated bibliography of the Supplementary Material.
Advancing Model Pruning via Bi-level Optimization
The deployment constraints in practical applications necessitate the pruning
of large-scale deep learning models, i.e., promoting their weight sparsity. As
illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the
potential of improving their generalization ability. At the core of LTH,
iterative magnitude pruning (IMP) is the predominant pruning method to
successfully find 'winning tickets'. Yet, the computation cost of IMP grows
prohibitively as the targeted pruning ratio increases. To reduce the
computation overhead, various efficient 'one-shot' pruning methods have been
developed, but these schemes are usually unable to find winning tickets as good
as IMP. This raises the question of how to close the gap between pruning accuracy and pruning efficiency. To tackle it, we pursue the algorithmic advancement of model pruning. Specifically, we formulate the pruning problem from a fresh viewpoint: bi-level optimization (BLO). We show that the
BLO interpretation provides a technically-grounded optimization base for an
efficient implementation of the pruning-retraining learning paradigm used in
IMP. We also show that the proposed bi-level optimization-oriented pruning
method (termed BiP) is a special class of BLO problems with a bi-linear problem
structure. By leveraging such bi-linearity, we theoretically show that BiP can
be solved as easily as first-order optimization, thus inheriting the
computation efficiency. Through extensive experiments on both structured and
unstructured pruning with 5 model architectures and 4 data sets, we demonstrate
that BiP can find better winning tickets than IMP in most cases, and is
computationally as efficient as the one-shot pruning schemes, demonstrating a 2-7 times speedup over IMP for the same level of model accuracy and sparsity.
Comment: Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022).
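For readers unfamiliar with IMP, the prune-retrain cycle the abstract refers to can be sketched on a toy sparse regression problem. This is a hypothetical numpy illustration of plain iterative magnitude pruning, not the paper's BiP method:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)        # sparse ground truth: 5 active weights
X = rng.normal(size=(n, d))
y = X @ w_true                         # noiseless targets

def train(X, y, mask, steps=1000, lr=0.1):
    """Gradient descent on squared error; pruned (masked) weights stay at zero."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad * mask
    return w

# Iterative magnitude pruning: alternate training with pruning the
# smallest-magnitude half of the surviving weights, then retrain the "ticket".
mask = np.ones(d)
for _ in range(3):
    w = train(X, y, mask)
    alive = np.flatnonzero(mask)
    drop = alive[np.argsort(np.abs(w[alive]))[: len(alive) // 2]]
    mask[drop] = 0.0
w = train(X, y, mask)                  # 50 -> 25 -> 13 -> 7 weights remain
```

Each cycle requires a full retraining, which is exactly the cost that grows prohibitively at high sparsity and that one-shot methods, and BiP, aim to avoid.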
Comedians without a Cause: The Politics and Aesthetics of Humour in Dutch Cabaret (1966-2020)
Comedians play an important role in society and public debate. While comedians have been considered important cultural critics for quite some time, comedy has acquired a new social and political significance in recent years, with humour taking centre stage in political and social debates around issues of identity, social justice, and freedom of speech. To understand the shifting meanings and political implications of humour within a Dutch context, this PhD thesis examines the political and aesthetic workings of humour in the highly popular Dutch cabaret genre, focusing on cabaret performances from the 1960s to the present. The central questions of the thesis are: how do comedians use humour to deliver social critique, and how does their humour resonate with political ideologies? These questions are answered by adopting a cultural studies approach to humour, which is used to analyse Dutch cabaret performances, and by studying related materials such as reviews and media interviews with comedians. This thesis shows that, from the 1960s onwards, Dutch comedians have been considered ‘progressive rebels’ – politically engaged, subversive, and carrying a left-wing political agenda – but that this image is in need of correction. While we tend to look for progressive political messages in the work of comedians who present themselves as being anti-establishment rebels – such as Youp van ‘t Hek, Hans Teeuwen, and Theo Maassen – this thesis demonstrates that their transgressive and provocative humour tends to protect social hierarchies and relationships of power. Moreover, it shows that, paradoxically, both the deliberately moderate and nuanced humour of Wim Kan and Claudia de Breij, and the seemingly past-oriented nostalgia of Alex Klaasen, are more radical and progressive than the transgressive humour of van ‘t Hek, Teeuwen and Maassen. 
Finally, comedians who present absurdist or deconstructionist forms of humour, such as the early student cabarets, Freek de Jonge, and Micha Wertheim, tend to disassociate themselves from an explicit political engagement. By challenging the dominant image of the Dutch comedian as a ‘progressive rebel,’ this thesis contributes to a better understanding of humour in the present cultural moment, in which humour is often either not taken seriously, or one-sidedly celebrated as being merely pleasurable, innocent, or progressively liberating. In so doing, this thesis concludes, the ‘dark’ and more conservative sides of humour tend to get obscured.
Minimum income support systems as elements of crisis resilience in Europe: Final Report
Minimum income support systems serve as the safety net of last resort in most developed welfare states. Accordingly, they play a particularly important role in times of economic crisis. The extent to which minimum income systems are drawn upon during a crisis also depends on the design of upstream social protection systems. This study examines the importance of minimum income systems and of upstream systems, such as unemployment insurance, short-time work schemes and employment protection legislation, for crisis resilience in Europe. In the context of the financial crisis of 2008/2009 and the coronavirus crisis, it assesses the capacity of social policy measures to contain poverty and income losses and to prevent social exclusion. The study combines quantitative and qualitative methods, including multivariate analyses, microsimulation methods and in-depth case studies of Denmark, France, Ireland, Poland and Spain, which represent different types of welfare states.

The aim of this study is to analyse the role of social policies in different European welfare states regarding minimum income protection and active inclusion. The core focus lies on crisis resilience, i.e. the capacity of social policy arrangements to contain poverty and inequality and avoid exclusion before, during and after periods of economic shocks. To achieve this goal, the study expands its analytical focus to include other tiers of social protection, in particular upstream systems such as unemployment insurance, job retention and employment protection, as they play an additional and potentially prominent role in providing income and job protection in situations of crisis. A mixed-method approach is used that combines quantitative and qualitative research, such as descriptive and multivariate quantitative analyses, microsimulation methods and in-depth case studies.
The study finds consistent differences in terms of crisis resilience across countries and welfare state types. In general, Nordic and Continental European welfare states with strong upstream systems and minimum income support (MIS) show better results on core socio-economic outcomes such as poverty and exclusion risks. However, labour market integration shows some dualisms in Continental Europe. The study shows that MIS holds particular importance if there are gaps in upstream systems or in cases of severe and lasting crises.
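The poverty outcomes discussed here are typically measured with the EU at-risk-of-poverty rate, the share of people below 60% of median equivalised disposable income. A stylised, hypothetical sketch of how a minimum income floor changes that indicator on simulated incomes (all numbers illustrative; this is not the study's microsimulation model):

```python
import numpy as np

def at_risk_of_poverty_rate(incomes):
    """Share of people below 60% of median income (the EU AROP definition)."""
    return float(np.mean(incomes < 0.6 * np.median(incomes)))

rng = np.random.default_rng(42)
market_income = rng.lognormal(mean=10.0, sigma=0.8, size=10_000)  # simulated incomes

# Stylised minimum income scheme: top every income up to a guaranteed floor.
# A floor set above the 60%-of-median threshold removes measured poverty entirely.
floor = 0.65 * np.median(market_income)
disposable = np.maximum(market_income, floor)

before = at_risk_of_poverty_rate(market_income)   # roughly 0.26 for this sigma
after = at_risk_of_poverty_rate(disposable)       # 0.0: everyone is above the line
```

Because the threshold is relative to the median, a transfer scheme can also shift the poverty line itself; real microsimulation models account for taxes, household equivalisation, and take-up, which this sketch omits.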
The place where curses are manufactured: four poets of the Vietnam War
The Vietnam War was unique among American wars. To pinpoint its uniqueness, it was necessary to look for a non-American voice that would enable me to articulate its distinctiveness and explore the American character as observed by an Asian. Takeshi Kaiko proved to be most helpful. From his novel, Into a Black Sun, I was able to establish a working pair of 'bookends' from which to approach the poetry of Walter McDonald, Bruce Weigl, Basil T. Paquet and Steve Mason. Chapter One is devoted to those seemingly mismatched 'bookends,' Walt Whitman and General William C. Westmoreland, and their respective anthropocentric and technocentric visions of progress and the peculiarly American concept of the "open road" as they manifest themselves in Vietnam. In Chapter Two, I analyze the war poems of Walter McDonald. As a pilot writing primarily about flying, McDonald manifests in his poetry General Westmoreland's technocentric vision of the 'road' as determined by and manifest through technology. Chapter Three focuses on the poems of Bruce Weigl. The poems analyzed portray the literal and metaphorical descent from the technocentric, 'numbed' distance of aerial warfare to the world of ground warfare, and the initiation of a 'fucking new guy,' who discovers the contours of the self's interior through a set of experiences that lead from aerial insertion into the jungle to the degradation of burning human feces. Chapter Four, devoted to the thirteen poems of Basil T. Paquet, focuses on the continuation of the descent begun in Chapter Two. In his capacity as a medic, Paquet details in his poems the quotidian tasks of tending the maimed, the mortally wounded and the dead. The final chapter deals with Steve Mason's Johnny's Song and his depiction of the plight of Vietnam veterans back in "The World" who are still trapped inside the interior landscape of their individual "ghettoes" of the soul created by their war-time experiences.
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, sometimes methods will capture more than one informational factor at the same time such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and independent of the task at hand. The learned representations should also be able to answer counter-factual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single-best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The term "disentanglement" is not well defined in previous work and has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
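The re-assembly idea described above (voice conversion as "keep the content, swap the speaker") can be sketched with a toy factorized latent vector. In this illustration the speaker/content split is fixed by construction and all names are hypothetical; real systems must learn such a split from data:

```python
import numpy as np

# Toy factorized latent: the first SPEAKER_DIM entries encode speaker identity,
# the remainder encodes spoken content. Purely for illustration.
SPEAKER_DIM = 4

def voice_conversion(z_source, z_target):
    """Keep the source utterance's content, swap in the target speaker."""
    return np.concatenate([z_target[:SPEAKER_DIM], z_source[SPEAKER_DIM:]])

rng = np.random.default_rng(0)
z_a = rng.normal(size=12)   # an utterance by speaker A
z_b = rng.normal(size=12)   # an utterance by speaker B

z_conv = voice_conversion(z_a, z_b)   # speaker B "saying" A's content
```

A content-privacy task would instead edit the content block while leaving the speaker block untouched; the value of disentanglement is that each downstream task manipulates only the factor it cares about.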