Navigating University Bureaucracy for Social Change: Transgender & Gender-Nonconforming Students
In large university structures, bureaucracy exists to provide academic support and foster student success. Some also argue that, as universities are increasingly run as businesses, bureaucracy keeps expanding to serve as the ‘customer support’ for their students. Under pressure to accommodate ever more students, large campuses keep adding and diversifying the bureaucratic offices that serve them. However one views its purpose, bureaucracy is widely regarded as an inefficient and frustrating necessity of navigating higher education. This paper analyzes a large Southern university campus, the University of South Carolina (Columbia), and focuses on transgender and gender-nonconforming students as an oppressed subpopulation within large university structures.
Adversarial Discriminative Domain Adaptation
Adversarial learning methods are a promising approach to training robust deep
networks, and can generate complex samples across diverse domains. They also
can improve recognition despite the presence of domain shift or dataset bias:
several adversarial approaches to unsupervised domain adaptation have recently
been introduced, which reduce the difference between the training and test
domain distributions and thus improve generalization performance. Prior
generative approaches show compelling visualizations, but are not optimal on
discriminative tasks and can be limited to smaller shifts. Prior discriminative
approaches could handle larger domain shifts, but imposed tied weights on the
model and did not exploit a GAN-based loss. We first outline a novel
generalized framework for adversarial adaptation, which subsumes recent
state-of-the-art approaches as special cases, and we use this generalized view
to better relate the prior approaches. We propose a previously unexplored
instance of our general framework which combines discriminative modeling,
untied weight sharing, and a GAN loss, which we call Adversarial Discriminative
Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably
simpler than competing domain-adversarial methods, and demonstrate the promise
of our approach by exceeding state-of-the-art unsupervised adaptation results
on standard cross-domain digit classification tasks and a new more difficult
cross-modality object classification task.
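To make the adversarial step described above concrete, the following is a minimal sketch of an ADDA-style training iteration, assuming a PyTorch setup with a frozen pre-trained source encoder, a separately parameterized (untied) target encoder, and a small domain discriminator trained with a standard GAN loss. Module names, shapes, and optimizers are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of an ADDA-style alignment step (assumed PyTorch setup).
# src_enc is the frozen pre-trained source encoder, tgt_enc the untied target
# encoder (typically initialized from src_enc), disc a small domain classifier.
import torch
import torch.nn as nn

def adda_step(src_enc, tgt_enc, disc, x_src, x_tgt, opt_tgt, opt_disc):
    bce = nn.BCEWithLogitsLoss()

    # 1) Discriminator update: source features labeled 1, target features 0.
    with torch.no_grad():
        f_src = src_enc(x_src)                      # source encoder stays frozen
    f_tgt = tgt_enc(x_tgt).detach()
    d_loss = bce(disc(f_src), torch.ones(f_src.size(0), 1)) \
           + bce(disc(f_tgt), torch.zeros(f_tgt.size(0), 1))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) Target-encoder update: GAN loss with inverted labels, i.e. make
    #    target features indistinguishable from source features.
    g_loss = bce(disc(tgt_enc(x_tgt)), torch.ones(x_tgt.size(0), 1))
    opt_tgt.zero_grad()
    g_loss.backward()
    opt_tgt.step()
    return d_loss.item(), g_loss.item()
```

After alignment, the setup in the abstract implies the frozen source classifier is simply applied to the adapted target features.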
Towards Adapting ImageNet to Reality: Scalable Domain Adaptation with Implicit Low-rank Transformations
Images seen during test time are often not from the same distribution as
images used for learning. This problem, known as domain shift, occurs when
training classifiers from object-centric internet image databases and trying to
apply them directly to scene understanding tasks. The consequence is often
severe performance degradation and is one of the major barriers for the
application of classifiers in real-world systems. In this paper, we show how to
learn transform-based domain adaptation classifiers in a scalable manner. The
key idea is to exploit an implicit rank constraint, originated from a
max-margin domain adaptation formulation, to make optimization tractable.
Experiments show that the transformation between domains can be very
efficiently learned from data and easily applied to new categories. This begins
to bridge the gap between large-scale internet image collections and object
images captured in everyday life environments.
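The core computational idea, a domain transform with an implicit rank constraint, can be sketched as follows. This is a rough illustration under assumed dimensions, using an explicit low-rank factorization W = A·B and a multi-class hinge loss as a stand-in for the paper's max-margin formulation; it is not the authors' exact optimization.

```python
# Illustrative low-rank domain transform W = A @ B trained with a hinge-style
# loss so transformed target features score well under fixed source classifiers.
# Dimensions, rank, and the SGD optimizer are assumptions for this sketch.
import torch
import torch.nn.functional as F

d, r, C = 512, 20, 200                        # feature dim, rank, #categories (assumed)
A = (0.01 * torch.randn(d, r)).requires_grad_()
B = (0.01 * torch.randn(r, d)).requires_grad_()
opt = torch.optim.SGD([A, B], lr=1e-3)

def transform_step(x_tgt, y_tgt, W_cls):
    """x_tgt: (N, d) labeled target features, y_tgt: (N,) labels,
    W_cls: (C, d) fixed source classifiers. One step on the factored transform."""
    z = x_tgt @ (A @ B).t()                   # transformed features, rank <= r
    loss = F.multi_margin_loss(z @ W_cls.t(), y_tgt)   # max-margin surrogate
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the learned transform is shared across categories, it can be reused for classes unseen while it was trained, which is the scalability the abstract emphasizes.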
LSDA: Large Scale Detection Through Adaptation
A major challenge in scaling object detection is the difficulty of obtaining
labeled images for large numbers of categories. Recently, deep convolutional
neural networks (CNNs) have emerged as clear winners on object classification
benchmarks, in part due to training with 1.2M+ labeled classification images.
Unfortunately, only a small fraction of those labels are available for the
detection task. It is much cheaper and easier to collect large quantities of
image-level labels from search engines than it is to collect detection data and
label it with precise bounding boxes. In this paper, we propose Large Scale
Detection through Adaptation (LSDA), an algorithm which learns the difference
between the two tasks and transfers this knowledge to classifiers for
categories without bounding box annotated data, turning them into detectors.
Our method has the potential to enable detection for the tens of thousands of
categories that lack bounding box annotations, yet have plenty of
classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge
demonstrates the efficacy of our approach. This algorithm enables us to produce
a >7.6K detector by using available classification data from leaf nodes in the
ImageNet tree. We additionally demonstrate how to modify our architecture to
produce a fast detector (running at 2fps for the 7.6K detector). Models and
software are available.
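A toy illustration of the classifier-to-detector transfer idea: learn how output-layer weights change when a category gains bounding-box supervision, then apply that change to categories that only have classification weights. Real LSDA adapts entire network layers and uses category-specific (nearest-neighbor) transfer, so everything below is an assumption-laden simplification.

```python
# Toy version of transferring the classification-to-detection change: compute
# the mean output-layer weight shift on categories that have detectors, then
# apply it to categories that only have classifiers. Array shapes are assumed.
import numpy as np

def adapt_classifiers(W_cls, W_det, has_boxes):
    """W_cls, W_det: (C, d) output-layer weights; has_boxes: (C,) boolean mask
    marking categories with bounding-box (detection) training data."""
    delta = (W_det[has_boxes] - W_cls[has_boxes]).mean(axis=0)  # mean cls->det shift
    W_out = W_cls.copy()
    W_out[~has_boxes] += delta              # approximate detectors for box-less classes
    W_out[has_boxes] = W_det[has_boxes]     # keep directly trained detectors
    return W_out
```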
Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings
Generalizing deep neural networks to new target domains is critical to their
real-world utility. In practice, it may be feasible to get some target data
labeled, but to be cost-effective it is desirable to select a
maximally-informative subset via active learning (AL). We study the problem of
AL under a domain shift, called Active Domain Adaptation (Active DA). We
empirically demonstrate how existing AL approaches based solely on model
uncertainty or diversity sampling are suboptimal for Active DA. Our algorithm,
Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings
(ADA-CLUE), i) identifies target instances for labeling that are both uncertain
under the model and diverse in feature space, and ii) leverages the available
source and target data for adaptation by optimizing a semi-supervised
adversarial entropy loss that is complementary to our active sampling
objective. On standard image classification-based domain adaptation benchmarks,
ADA-CLUE consistently outperforms competing active adaptation, active learning,
and domain adaptation methods across domain shifts of varying severity.
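The sampling criterion described above, uncertain and diverse, admits a compact sketch: weight each target embedding by its predictive entropy, cluster with that weighting, and label the point nearest each cluster center. Using scikit-learn's weighted KMeans is an assumption for illustration, not the authors' implementation.

```python
# Sketch of uncertainty-weighted clustering for active sampling: entropy-weighted
# k-means over target embeddings, then label the instance nearest each centroid.
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(embeddings, probs, budget):
    """embeddings: (N, d) target features; probs: (N, C) softmax outputs;
    budget: number of target instances to send for annotation."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)    # per-sample uncertainty
    km = KMeans(n_clusters=budget, n_init=10)
    km.fit(embeddings, sample_weight=entropy)                   # uncertainty-weighted clusters
    dists = km.transform(embeddings)                            # (N, budget) distances
    return np.argmin(dists, axis=0)                             # one index per cluster
```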
Detector Discovery in the Wild: Joint Multiple Instance and Representation Learning
We develop methods for detector learning which exploit joint training over
both weak and strong labels and which transfer learned perceptual
representations from strongly-labeled auxiliary tasks. Previous methods for
weak-label learning often learn detector models independently using latent
variable optimization, but fail to share deep representation knowledge across
classes and usually require strong initialization. Other previous methods
transfer deep representations from domains with strong labels to those with
only weak labels, but do not optimize over individual latent boxes, and thus
may miss specific salient structures for a particular category. We propose a
model that subsumes these previous approaches, and simultaneously trains a
representation and detectors for categories with either weak or strong labels
present. We provide a novel formulation of a joint multiple instance learning
method that includes examples from classification-style data when available,
and also performs domain transfer learning to improve the underlying detector
representation. Our model outperforms known methods on ImageNet-200 detection
with weak labels.
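The multiple-instance portion of the joint objective can be sketched as below: for a weakly labeled image, candidate boxes form a bag, and only the highest-scoring box receives the image-level label, so training latches onto the most salient latent box. The box scorer and the proposal source are assumed here; strongly labeled images would instead supervise individual boxes directly while sharing the same representation, which is the joint training the abstract refers to.

```python
# Sketch of the weak-label (multiple-instance) term: a weakly labeled image is
# a bag of candidate boxes, and only the top-scoring box carries the image label.
import torch
import torch.nn.functional as F

def weak_label_loss(box_scores, image_label):
    """box_scores: (num_boxes, num_classes) logits for one image's proposals;
    image_label: scalar LongTensor holding the image-level class index."""
    bag_logits, _ = box_scores.max(dim=0)            # best box per class (latent choice)
    return F.cross_entropy(bag_logits.unsqueeze(0),  # image-level supervision only
                           image_label.view(1))
```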