Secrecy and Intelligence: Introduction
The catalyst for this special issue of Secrecy and Society was a workshop titled "Secrecy and Intelligence: Opening the Black Box," held at North Carolina State University in April 2016. The workshop brought together scholars, intelligence practitioners, and civil society members from the United States and Europe to discuss how different facets of secrecy and other practices shape the production of knowledge in intelligence work. The dialogue reflected on how the closed social worlds of intelligence shape what intelligence actors and analysts, both within the intelligence establishment and outside it, know about security threats and the practice of intelligence. The papers in this special issue reflect conversations that occurred during and after the workshop.
Fast Bunch Integrators at Fermilab During Run II
The Fast Bunch Integrator is a bunch intensity monitor designed around
measurements from Resistive Wall Current Monitors. During the Run II
period these systems were used in both the Tevatron and the Main Injector for
single- and multiple-bunch intensity measurements. This paper presents an
overview of the design and use of these systems during this period.
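As a conceptual sketch of the underlying measurement idea (the integral of the wall-current signal over a bunch gate is proportional to the bunch charge), the Python snippet below illustrates gated numerical integration of a digitized monitor waveform. The function name, sample values, and calibration factor are illustrative assumptions; the actual Fast Bunch Integrator performs this gating and integration in dedicated hardware as described in the paper.

    import numpy as np

    def bunch_intensity(waveform_volts, sample_period_s, gate, volts_per_amp):
        """Estimate bunch charge (coulombs) from a digitized wall-current signal.

        Conceptual sketch only: the real Fast Bunch Integrator does this
        gated integration in hardware, not in software.
        """
        start, stop = gate
        current_a = waveform_volts[start:stop] / volts_per_amp  # volts -> amperes
        return np.sum(current_a) * sample_period_s              # Q = integral of I dt

    # Hypothetical example: a Gaussian bunch profile sampled every 1 ns.
    t = np.arange(0.0, 200e-9, 1e-9)
    signal = 2.0 * np.exp(-0.5 * ((t - 100e-9) / 5e-9) ** 2)    # volts
    charge = bunch_intensity(signal, 1e-9, gate=(50, 150), volts_per_amp=1.0)
    print(f"estimated bunch charge: {charge:.3e} C")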
An NMF-Based Building Block for Interpretable Neural Networks With Continual Learning
Existing learning methods often struggle to balance interpretability and
predictive performance. While models like nearest neighbors and non-negative
matrix factorization (NMF) offer high interpretability, their predictive
performance on supervised learning tasks is often limited. In contrast, neural
networks based on the multi-layer perceptron (MLP) support the modular
construction of expressive architectures and tend to have better recognition
accuracy but are often regarded as black boxes in terms of interpretability.
Our approach aims to strike a better balance between these two aspects through
the use of a building block based on NMF that incorporates supervised neural
network training methods to achieve high predictive performance while retaining
the desirable interpretability properties of NMF. We evaluate our Predictive
Factorized Coupling (PFC) block on small datasets and show that it achieves
competitive predictive performance with MLPs while also offering improved
interpretability. We demonstrate the benefits of this approach in various
scenarios, such as continual learning, training on non-i.i.d. data, and
knowledge removal after training. Additionally, we show examples of using the
PFC block to build more expressive architectures, including a fully-connected
residual network as well as a factorized recurrent neural network (RNN) that
performs competitively with vanilla RNNs while providing improved
interpretability. The PFC block uses an iterative inference algorithm that
converges to a fixed point, making it possible to trade off accuracy vs
computation after training but also currently preventing its use as a general
MLP replacement in some scenarios such as training on very large datasets. We
provide source code at https://github.com/bkvogel/pfc
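The paper's actual PFC implementation is in the linked repository; as a minimal sketch of the kind of fixed-point NMF inference the abstract describes, the snippet below infers nonnegative codes for a fixed dictionary using Lee-Seung multiplicative updates, where the iteration count controls the accuracy/computation trade-off. The array shapes, initialization, and variable names are illustrative assumptions, not the PFC block itself.

    import numpy as np

    def infer_codes(X, W, n_iters=50, eps=1e-8):
        """Infer nonnegative codes H with X ~= W @ H, holding the dictionary W fixed.

        Lee-Seung multiplicative updates converge toward a fixed point, so fewer
        iterations trade reconstruction accuracy for less computation.
        """
        H = np.full((W.shape[1], X.shape[1]), 0.1)
        for _ in range(n_iters):
            H *= (W.T @ X) / (W.T @ (W @ H) + eps)  # update keeps H nonnegative
        return H

    # Hypothetical usage: inferred codes could feed a small supervised readout.
    rng = np.random.default_rng(0)
    X = rng.random((64, 32))   # 32 nonnegative samples with 64 features each
    W = rng.random((64, 16))   # 16 nonnegative basis vectors (assumed already learned)
    H = infer_codes(X, W)
    print(H.shape)             # (16, 32): per-sample codes over interpretable parts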
Transmission and performance of taiko in Edo Bayashi, Hachijo, and modern kumi-daiko styles
This document is a study of the history, instruments, transmission methods, and performance practices of three types of Japanese taiko drumming. Included are transcriptions of representative pieces, several of which have never been written down in Western notation, as taiko is generally an orally transmitted musical form. Field research was conducted during the summers of 2007 and 2008 with renowned taiko artist Kenny Endo at the Taiko Center of the Pacific in Honolulu, Hawaii.