How to Create an Oral History Program
The archival literature is full of calls to document under-represented voices, to create participatory archives, and to be an activist archivist. However, when funds and time are limited, these ideals can seem impossible to implement. What's an archivist to do? One easy and affordable option is to create an oral history program. This workshop will give you the skills and the confidence to start an oral history program at your own institution. It will cover the main steps, from performing preliminary research and developing questions all the way through thinking about how to promote and use your oral histories once they've been transcribed and edited. Participants will leave this workshop with a step-by-step plan to start an oral history program once they return to their institutions.
Education choices in Mexico: using a structural model and a randomised experiment to evaluate PROGRESA
In this paper we evaluate the effect of a large welfare program in rural Mexico. For this purpose we use an evaluation sample that includes a number of villages where the program was deliberately withheld for evaluation purposes. We estimate a structural model of education choices and argue that without such a framework it is impossible to evaluate the effect of the program and, especially, of possible changes to its structure. We also argue that the randomized component of the data allows us to identify a more flexible model that is better suited to evaluating the program. We find that the program has a positive effect on the enrollment of children, especially after primary school. We also find that an approximately revenue-neutral change in the program that increased the grant for secondary school children while eliminating it for primary school children would have a substantially larger effect on secondary school enrollment, while having only minor effects on primary school enrollment.
An Editor for Helping Novices to Learn Standard ML
This paper describes a novel editor intended as an aid in learning the functional programming language Standard ML. A common technique used by novices is programming by analogy, whereby students refer to similar programs that they have written before or have seen in the course literature and use these programs as a basis for writing a new program. We present a novel editor for ML which supports programming by analogy by providing a collection of editing commands that transform old programs into new ones. Each command makes changes to an isolated part of the program; these changes are propagated to the rest of the program using analogical techniques. We observed a group of novice ML students to determine the most common programming errors in learning ML, and we restrict our editor such that it is impossible to commit these errors. In this way, students encounter fewer bugs and so their rate of learning increases. Our editor, CYNTHIA, has been implemented and is due to be tested on students.
Probing quantum-classical boundary with compression software
We experimentally demonstrate that it is impossible to simulate quantum bipartite correlations with a deterministic universal Turing machine. Our approach is based on the Normalized Information Distance (NID), which allows the comparison of two pieces of data without detailed knowledge of their origin. Using NID, we derive an inequality for the outputs of two local deterministic universal Turing machines with correlated inputs. This inequality is violated by correlations generated by a maximally entangled polarization state of two photons. The violation is shown using a freely available lossless compression program. The presented technique may make it possible to complement the common statistical interpretation of quantum physics with an algorithmic one.
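The NID itself is uncomputable, so practical work of this kind substitutes a real compressor for Kolmogorov complexity, yielding the Normalized Compression Distance (NCD) of Cilibrasi and Vitányi: NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is compressed length. A minimal sketch using zlib as the lossless compressor (the paper's specific compressor and data are not shown here; this only illustrates the distance measure):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, a computable proxy for NID.

    Uses zlib compressed lengths in place of Kolmogorov complexity.
    Values near 0 mean the inputs share most of their information;
    values near 1 mean they share almost none.
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Identical data should be much "closer" than unrelated data.
repetitive = b"abcd" * 250
unrelated = bytes((i * 37 + 11) % 256 for i in range(1000))
print(ncd(repetitive, repetitive))  # small
print(ncd(repetitive, unrelated))   # larger
```

Because real compressors only approximate Kolmogorov complexity, NCD can slightly exceed 1 on short or incompressible inputs; inequalities derived from it therefore carry a compressor-dependent slack.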
Building Program Vector Representations for Deep Learning
Deep learning has made significant breakthroughs in various fields of artificial intelligence. Its advantages include the ability to capture highly complicated features with little human feature engineering. However, it is still virtually impossible to use deep learning to analyze programs, since deep architectures cannot be trained effectively with pure backpropagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are a prerequisite for deep learning on program analysis. Our representation learning approach makes deep learning a reality in this new field. We evaluate the learned vector representations both qualitatively and quantitatively, and we conclude from the experiments that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks and achieve higher accuracy on the program classification task than "shallow" methods such as logistic regression and the support vector machine. This result confirms the feasibility of deep learning for program analysis and gives primary evidence of its success in this new field. We believe deep learning will become an outstanding technique for program analysis in the near future.
The multi-program performance model: debunking current practice in multi-core simulation
Composing a representative multi-program multi-core workload is non-trivial. A multi-core processor can execute multiple independent programs concurrently, and hence any program mix can form a potential multi-program workload. Given the very large number of possible multi-program workloads and the limited speed of current simulation methods, it is impossible to evaluate all of them. This paper presents the Multi-Program Performance Model (MPPM), a method for quickly estimating multi-program multi-core performance based on single-core simulation runs. MPPM employs an iterative method to model the tight performance entanglement between co-executing programs on a multi-core processor with shared caches. Because MPPM relies on analytical modeling, it is very fast, and it estimates multi-core performance for a very large number of multi-program workloads in a reasonable amount of time. In addition, it provides confidence bounds on its performance estimates. Using SPEC CPU2006 and up to 16 cores, we report an average performance prediction error of 2.3% for system throughput (STP) and 2.9% for average normalized turnaround time (ANTT), while being up to five orders of magnitude faster than detailed simulation. Subsequently, we demonstrate that randomly picking a limited number of multi-program workloads, as done in current practice, can lead to incorrect design decisions in practical design and research studies; this is alleviated by using MPPM. In addition, MPPM can be used to quickly identify multi-program workloads that stress multi-core performance through excessive conflict behavior in shared caches; these stress workloads can then be used to drive the design process further.
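The two reported metrics have standard definitions: STP sums each program's multi-core progress normalized to its isolated single-core run, while ANTT averages the per-program slowdown. A minimal sketch, assuming per-program IPC values from single-core (`ipc_sp`) and multi-core (`ipc_mp`) runs are available; the function and variable names are illustrative, not MPPM's implementation:

```python
def stp(ipc_mp, ipc_sp):
    """System throughput: accumulated normalized progress of all
    co-running programs. Higher is better; equals n if nothing slows down."""
    return sum(m / s for m, s in zip(ipc_mp, ipc_sp))

def antt(ipc_mp, ipc_sp):
    """Average normalized turnaround time: mean per-program slowdown
    relative to isolated execution. Lower is better; 1.0 means no slowdown."""
    return sum(s / m for m, s in zip(ipc_mp, ipc_sp)) / len(ipc_mp)

# Two co-running programs, each at half its isolated IPC:
print(stp([1.0, 0.5], [2.0, 1.0]))   # 1.0
print(antt([1.0, 0.5], [2.0, 1.0]))  # 2.0
```

A model like MPPM estimates the `ipc_mp` values analytically from single-core simulation profiles instead of simulating every program mix in detail.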