Large Margin Neural Language Model
We propose a large margin criterion for training neural language models.
Conventionally, neural language models are trained by minimizing perplexity
(PPL) on grammatical sentences. However, we demonstrate that PPL may not be the
best metric to optimize in some tasks, and further propose a large margin
formulation. The proposed method aims to enlarge the margin between the "good"
and "bad" sentences in a task-specific sense. It is trained end-to-end and can
be widely applied to tasks that involve re-scoring of generated text. Compared
with minimum-PPL training, our method achieves up to a 1.1-point reduction in word error rate (WER) for speech recognition and a 1.0-point BLEU increase for machine translation.
Comment: 9 pages. Accepted as a long paper at EMNLP 2018.
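The abstract describes the criterion only at a high level; as a minimal sketch, assuming a pairwise hinge formulation over LM log-probabilities (all names and the margin value are illustrative, not taken from the paper):

```python
# Sketch of a pairwise large-margin loss for LM re-scoring (assumed
# formulation, not the paper's exact objective): penalize pairs where
# the task-specific "good" sentence fails to out-score the "bad" one
# by at least `margin` in log-probability.
import torch

def pairwise_margin_loss(good_logp: torch.Tensor,
                         bad_logp: torch.Tensor,
                         margin: float = 1.0) -> torch.Tensor:
    # Hinge on the log-probability gap; the loss is zero once the gap
    # exceeds the margin, so well-separated pairs stop contributing.
    return torch.clamp(margin - (good_logp - bad_logp), min=0.0).mean()

# Usage: in practice the scores come from the neural LM being trained
# end-to-end; here they are placeholder tensors.
good_logp = torch.tensor([-12.3, -9.8])   # LM log-probs of "good" sentences
bad_logp = torch.tensor([-11.9, -14.2])   # LM log-probs of "bad" sentences
print(pairwise_margin_loss(good_logp, bad_logp))  # tensor(0.7000)
```

Because the hinge saturates at zero, training concentrates on pairs the model still ranks incorrectly, unlike minimum-PPL training, which keeps pushing probability mass onto all reference sentences.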
One-Child Policy, Marriage Distortion, and Welfare Loss
Using plausibly exogenous variation in ethnicity-specific assigned birth quotas and in fertility penalties across Chinese provinces over time, we provide new evidence for the transferable utility model by showing how China's One-Child Policy induced a significantly higher unmarried rate and more interethnic marriages in China. We further develop the model and find that the policy-induced welfare loss originates not only from restricted fertility but also from marriage distortion, and both components depend solely on the corresponding reduced-form elasticities. Our calculations suggest that the total welfare loss is around 4.9 percent of yearly household income, with marriage distortion contributing 17 percent of this loss. These findings highlight the importance of taking into account the unintended behavioral responses to public policies and their social consequences.
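As a quick arithmetic check of the figures above (the decomposition itself is reported in the abstract): marriage distortion at 17 percent of the 4.9 percent total corresponds to roughly 0.17 × 4.9% ≈ 0.8% of yearly household income, leaving about 4.1% attributable to restricted fertility.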