261 research outputs found
Interpreting Deep Visual Representations via Network Dissection
The success of recent deep convolutional neural networks (CNNs) depends on
learning hidden representations that can summarize the important factors of
variation behind the data. However, CNNs are often criticized as being black boxes
that lack interpretability, since they have millions of unexplained model
parameters. In this work, we describe Network Dissection, a method that
interprets networks by providing labels for the units of their deep visual
representations. The proposed method quantifies the interpretability of CNN
representations by evaluating the alignment between individual hidden units and
a set of visual semantic concepts. By identifying the best alignments, units
are given human interpretable labels across a range of objects, parts, scenes,
textures, materials, and colors. The method reveals that deep representations
are more transparent and interpretable than expected: we find that
representations are significantly more interpretable than they would be under a
random equivalently powerful basis. We apply the method to interpret and
compare the latent representations of various network architectures trained to
solve different supervised and self-supervised training tasks. We then examine
factors affecting network interpretability, such as the number of training
iterations, regularization, initialization, and network depth and width.
Finally, we show that the interpreted units can be used
to provide explicit explanations of a prediction given by a CNN for an image.
Our results highlight that interpretability is an important property of deep
neural networks that provides new insights into their hierarchical structure.
Comment: *B. Zhou and D. Bau contributed equally to this work. 15 pages, 27 figures.
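The alignment score described above can be illustrated with a minimal sketch: threshold a unit's activation map into a binary mask and score it against each binary concept mask by intersection-over-union, labeling the unit with its best-matching concept. The function names, the threshold value, and the toy masks below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """Score how well one unit's activation map aligns with a binary
    concept mask, via intersection-over-union of the thresholded map."""
    unit_mask = activation > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union else 0.0

def best_label(activation, concepts, threshold=0.5):
    """Assign the unit the human-interpretable label of its best-aligned concept."""
    scores = {name: unit_concept_iou(activation, mask, threshold)
              for name, mask in concepts.items()}
    return max(scores, key=scores.get), scores

# Toy example: a unit that fires on the left half of a 4x4 "image".
act = np.zeros((4, 4))
act[:, :2] = 1.0
left = np.zeros((4, 4), dtype=bool); left[:, :2] = True   # concept: left half
top = np.zeros((4, 4), dtype=bool); top[0, :] = True      # concept: top row
label, scores = best_label(act, {"left-half": left, "top-row": top})
```

Here the unit aligns perfectly with the left-half concept (IoU 1.0) and only weakly with the top-row concept (IoU 0.2), so it receives the "left-half" label.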
South Dakota Beef Industry
The beef industry in South Dakota is an important component of the state's agricultural economy. South Dakota beef producers market approximately 2.0 million head of cattle and calves annually, with value in excess of 1.2 billion dollars in 1984. This revenue represents over 60 percent of total livestock receipts for the state and over 35 percent of total agricultural sales. In 1985, South Dakota cattle gross income was over $1,336 million. The significance of South Dakota cattle production is further demonstrated by a national ranking of fifth in beef cows that calved and ninth in total production of cattle and calves in 1985.

Cattle numbers are declining at both the national and state level, falling from a national total of 114.4 million head at the end of 1980 to 105.5 million head at the end of 1985. South Dakota cattle numbers declined from 4.1 million head to 3.6 million head over the same five-year period. Per capita consumption of beef has held fairly constant since 1978 at 77-80 retail pounds and is presently around 77 pounds. Even with declining cattle numbers and constant consumption, price has not increased enough to stop the reduction phase of the present cycle.

In fiscal year 1985, 1,499,489 head of cattle were shipped out of South Dakota with only 477,167 head coming in, leaving a net outflow of 1,022,322 head. State inventories were down slightly. This leaves the South Dakota cattle producer dependent on out-of-state demand to absorb the net flow of cattle out of the state.

The total number of packing plants in the United States decreased from a peak of 6,255 plants in 1976 to 5,558 at the end of 1983. Average plant size is increasing, reflecting closings of small plants over the last decade. U.S. beef slaughter is shifting west and south: the West North Central and Southern Plains regions reported a 12 percent increase in the proportion of cattle slaughtered there between 1972 and 1982. This indicates a shift in slaughter away from plants located near large urban areas in the East North Central and Eastern regions of the nation to plants located close to cattle production areas. This shift in slaughter plant location parallels the westward movement of cattle feeding. Today, plants are increasing the production of boxed beef and decreasing the production of whole carcass beef. Processing beef into boxed beef increased from 44 percent to 58 percent of all steer and heifer slaughter between 1979 and 1982.

This study was conducted to update existing information on the South Dakota cattle industry at the producer, feeder, slaughter, and processor levels, and to examine construction and operating costs of South Dakota beef slaughter plants.
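The fiscal-year 1985 shipment figures above imply the stated net outflow; a quick arithmetic check:

```python
shipped_out = 1_499_489   # head of cattle shipped out of South Dakota, FY1985
shipped_in = 477_167      # head of cattle shipped into the state, FY1985
net_outflow = shipped_out - shipped_in   # 1,022,322 head, matching the text
```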
Electro-worming: The Behaviors of Caenorhabditis (C.) elegans in DC and AC Electric Fields
The video showcases how C. elegans worms respond to DC and AC electrical
stimulation. Gabel et al. (2007) demonstrated that in the presence of DC and
low frequency AC fields, worms of stage L2 and larger propel themselves towards
the cathode. Rezai et al. (2010) demonstrated that this phenomenon, dubbed
electrotaxis, can be used to control the motion of worms. In the video, we
reproduce Rezai's experimental results. Furthermore, we show, for the first
time, that worms can be trapped with high frequency, nonuniform electric
fields. We studied the effect of the electric field on the nematode as a
function of field intensity and frequency and identified a range of electric
field intensities and frequencies that trap worms without apparent adverse
effect on their viability. Worms tethered by dielectrophoresis (DEP) avoid blue
light, indicating that at least some of the nervous system functions remain
unimpaired in the presence of the electric field. DEP is useful to dynamically
confine nematodes for observations, sort them according to size, and separate
dead worms from live ones.
Comment: Two videos are included. The videos have been uploaded on
eCommons@Cornell at the following link:
http://ecommons.library.cornell.edu/handle/1813/1410
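The trapping described above relies on dielectrophoresis. For context, the standard time-averaged DEP force on an idealized spherical particle of radius $r$ in a medium of permittivity $\varepsilon_m$ is a textbook result (not taken from this abstract, and a worm is of course not a sphere):

```latex
\mathbf{F}_{\mathrm{DEP}}
  = 2\pi \varepsilon_m r^3 \,
    \mathrm{Re}\!\left[K(\omega)\right] \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^2,
\qquad
K(\omega)
  = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*},
\qquad
\varepsilon^* = \varepsilon - i\,\sigma/\omega
```

The force scales with the gradient of the squared field, which is why it requires a nonuniform field, and its sign and magnitude depend on frequency through the Clausius-Mossotti factor $K(\omega)$, consistent with the frequency window for trapping reported above.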
Semantic Photo Manipulation with a Generative Image Prior
Despite the recent success of GANs in synthesizing images conditioned on
inputs such as a user sketch, text, or semantic labels, manipulating the
high-level attributes of an existing natural photograph with GANs is
challenging for two reasons. First, it is hard for GANs to precisely reproduce
an input image. Second, after manipulation, the newly synthesized pixels often
do not fit the original image. In this paper, we address these issues by
adapting the image prior learned by GANs to image statistics of an individual
image. Our method can accurately reconstruct the input image and synthesize new
content, consistent with the appearance of the input image. We demonstrate our
interactive system on several semantic image editing tasks, including
synthesizing new objects consistent with background, removing unwanted objects,
and changing the appearance of an object. Quantitative and qualitative
comparisons against several existing methods demonstrate the effectiveness of
our method.
Comment: SIGGRAPH 2019.
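The per-image adaptation idea above can be sketched in miniature: start from pretrained generator weights, then fine-tune them by gradient descent so the generator exactly reproduces one particular image from its latent code. The linear "generator" below is a stand-in toy, not the paper's GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained generator": a linear map from latent code z to image pixels.
W = rng.normal(size=(16, 4))      # stand-in for pretrained generator weights
z = rng.normal(size=4)            # latent code assigned to the photo
target = rng.normal(size=16)      # the natural photograph to reproduce

# Per-image adaptation: nudge the generator weights so G(z) = W @ z matches
# this one image, via gradient descent on 0.5 * ||W z - target||^2.
lr = 0.5 / (z @ z)                # step size scaled for stable convergence
for _ in range(500):
    residual = W @ z - target
    W -= lr * np.outer(residual, z)   # gradient of the squared-error loss

reconstruction_error = np.linalg.norm(W @ z - target)
```

After adaptation the generator reproduces the target almost exactly; edits made in the adapted model then inherit the image's own statistics, which is the intuition the abstract describes.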
An Alternative to Regulation: The Case for Public AI
Can governments build AI? In this paper, we describe an ongoing effort to
develop ``public AI'' -- publicly accessible AI models funded, provisioned, and
governed by governments or other public bodies. Public AI presents both an
alternative and a complement to standard regulatory approaches to AI, but it
also suggests new technical and policy challenges. We present a roadmap for how
the ML research community can help shape this initiative and support its
implementation, and how public AI can complement other responsible AI
initiatives.
Comment: To be presented at the Regulatable ML workshop @ NeurIPS 2023.
Unified Concept Editing in Diffusion Models
Text-to-image models suffer from various safety issues that may limit their
suitability for deployment. Previous methods have separately addressed
individual issues of bias, copyright, and offensive content in text-to-image
models. However, in the real world, all of these issues appear simultaneously
in the same model. We present a method that tackles all issues with a single
approach. Our method, Unified Concept Editing (UCE), edits the model without
training using a closed-form solution, and scales seamlessly to concurrent
edits on text-conditional diffusion models. We demonstrate scalable
simultaneous debiasing, style erasure, and content moderation by editing
text-to-image projections, and we present extensive experiments demonstrating
improved efficacy and scalability over prior work. Our code is available at
https://unified.baulab.inf
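A training-free, closed-form weight edit of the kind described above can be sketched as a least-squares problem: map each edited concept key to a new target value while keeping the outputs for preserved keys close to the old ones. This is a generic ridge-style sketch under those assumptions, not the authors' exact formulation; all names are illustrative.

```python
import numpy as np

def closed_form_edit(W_old, edit_keys, edit_values, preserve_keys, reg=1e-6):
    """Closed-form weight edit: find W minimizing
        sum_i ||W c_i - v_i||^2  +  sum_j ||W c_j - W_old c_j||^2
    over edit pairs (c_i, v_i) and preserved keys c_j.
    Setting the gradient to zero gives W @ B = A with A, B as below."""
    C1 = np.stack(edit_keys, axis=1)        # (d, n_edit)
    V1 = np.stack(edit_values, axis=1)      # (m, n_edit)
    C2 = np.stack(preserve_keys, axis=1)    # (d, n_preserve)
    A = V1 @ C1.T + (W_old @ C2) @ C2.T
    B = C1 @ C1.T + C2 @ C2.T + reg * np.eye(W_old.shape[1])
    return np.linalg.solve(B, A.T).T        # W = A @ inv(B); B is symmetric

# Toy example: erase one concept direction's output, preserve another.
W_old = np.eye(2)
W_new = closed_form_edit(
    W_old,
    edit_keys=[np.array([1.0, 0.0])],
    edit_values=[np.zeros(2)],              # map the edited concept to zero
    preserve_keys=[np.array([0.0, 1.0])],
)
```

Because the solution is a single linear solve, many concurrent edits can be stacked into the same system at negligible cost, which is what makes this style of editing scale.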