
    ADADELTA: An Adaptive Learning Rate Method

    We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.
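
    The abstract leaves the update rule implicit; the sketch below is a minimal NumPy rendering of the per-dimension rule described in the full paper (variable names are ours). It shows why no learning rate is needed: the step size is set by the ratio of two running RMS estimates.

        import numpy as np

        def adadelta_update(grad, state, rho=0.95, eps=1e-6):
            # Accumulate a decaying average of squared gradients.
            state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
            # Step = RMS of past updates / RMS of gradients, times the
            # gradient; no global learning rate appears anywhere.
            dx = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
            # Accumulate a decaying average of squared updates.
            state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx ** 2
            return dx

        # Usage: one step for a 3-parameter model.
        state = {"Eg2": np.zeros(3), "Edx2": np.zeros(3)}
        params = np.zeros(3)
        params += adadelta_update(np.array([0.1, -0.2, 0.3]), state)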

    Empirical Health Law Scholarship: The State of the Field

    The last three decades have seen the blossoming of the fields of health law and empirical legal studies and their intersection--empirical scholarship in health law and policy. Researchers in legal academia and other settings have conducted hundreds of studies using data to estimate the effects of health law on accident rates, health outcomes, health care utilization, and costs, as well as other outcome variables. Yet the emerging field of empirical health law faces significant challenges--practical, methodological, and political. The purpose of this Article is to survey the current state of the field by describing commonly used methods, analyzing enabling and inhibiting factors in the production and uptake of this type of research by policymakers, and suggesting ways to increase the production and impact of empirical health law studies. In some areas of inquiry, high-quality research has been conducted, and the findings have been successfully imported into policy debates and used to inform evidence-based lawmaking. In other areas, the level of rigor has been uneven, and the best evidence has not translated effectively into sound policy. Despite challenges and historical shortcomings, empirical health law studies can and should have a substantial impact on regulations designed to improve public safety, increase both access to and quality of health care, and foster technological innovation.

    Common-Law Disclosure Duties and the Sin of Omission: Testing the Meta-Theories

    This Article represents the first attempt to study empirically the factors that cause courts to impose disclosure duties on bargaining parties in some circumstances, but not in others. We analyze data coded from 466 decisions spanning a wide array of jurisdictions and covering over two hundred years. The results are mixed. In some instances our data support the conventional wisdom relating to common-law disclosure duties. For example, we find that courts are more likely to require the disclosure of latent, as opposed to patent, defects and are more likely to require disclosure when the parties are in a fiduciary or confidential relationship. In other instances, our results cast doubt on much of the conventional wisdom regarding the law of fraudulent silence. First, although it is generally understood that courts have become more likely to impose disclosure duties over time, we find that courts actually have become less likely over time to impose duties to disclose. Second, and perhaps most importantly, we find that courts are no more likely to impose disclosure duties when the information is casually acquired as opposed to deliberately acquired, and that unequal access to information by the contracting parties is not a significant factor that drives courts to find a duty to disclose. We do find, however, that when both factors are present courts are significantly more likely to force disclosure.

    Visualizing and Understanding Convolutional Networks

    Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.
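
    The paper's deconvnet-based technique is more involved than an abstract can convey. As a purely illustrative sketch of the first step, capturing the intermediate feature maps one would visualize, forward hooks on a stock AlexNet suffice (PyTorch here; the model, layer indices, and names are our choices, not the paper's):

        import torch
        from torchvision import models

        # Illustration only: this is not the paper's deconvnet, just a way
        # to capture the intermediate activations such a visualization
        # starts from.
        model = models.alexnet(weights=None).eval()  # load trained weights in practice

        activations = {}

        def save_activation(name):
            def hook(module, inputs, output):
                activations[name] = output.detach()
            return hook

        # Hook the first and last convolutional layers of the feature stack.
        model.features[0].register_forward_hook(save_activation("conv1"))
        model.features[10].register_forward_hook(save_activation("conv5"))

        with torch.no_grad():
            model(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed image

        for name, act in activations.items():
            print(name, tuple(act.shape))  # conv1 (1, 64, 55, 55), conv5 (1, 256, 13, 13)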

    The willingness to pay-willingness to accept gap, the "endowment effect," subject misconceptions, and experimental procedures for eliciting valuations

    We conduct experiments to explore the possibility that subject misconceptions, as opposed to a particular theory of preferences referred to as the “endowment effect,” account for reported gaps between willingness to pay (“WTP”) and willingness to accept (“WTA”). The literature reveals two important facts. First, there is no consensus regarding the nature or robustness of WTP-WTA gaps. Second, while experimenters are careful to control for subject misconceptions, there is no consensus about the fundamental properties of misconceptions or how to avoid them. Instead, by implementing different types of experimental controls, experimenters have revealed notions of how misconceptions arise. Experimenters have applied these controls separately or in different combinations. Such controls include ensuring subject anonymity, using incentive-compatible elicitation mechanisms, and providing subjects with practice and training on the elicitation mechanism before employing it to measure valuations. The pattern of results reported in the literature suggests that the widely differing reports of WTP-WTA gaps could be due to an incomplete science regarding subject misconceptions. We implement a “revealed theory” methodology to compensate for the lack of a theory of misconceptions. Theories implicit in experimental procedures found in the literature are at the heart of our experimental design. Thus, our approach to addressing subject misconceptions reflects an attempt to control simultaneously for all dimensions of concern over possible subject misconceptions found in the literature. To this end, our procedures modify the Becker-DeGroot-Marschak mechanism used in previous studies to elicit values. In addition, our procedures supplement commonly used procedures by providing extensive training on the elicitation mechanism before subjects provide WTP and WTA responses. Experiments were conducted using both lotteries and mugs, goods frequently used in endowment effect experiments. Using the modified procedures, we observe no gap between WTA and WTP. Therefore, our results call into question the interpretation of observed gaps as evidence of loss aversion or prospect theory. Further evidence is required before convincing interpretations of observed gaps can be advanced.
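
    The Becker-DeGroot-Marschak mechanism the authors modify is easy to state in code. Below is a minimal sketch of its selling-side (WTA) logic; the price range and names are illustrative, and the authors' actual modifications are not reproduced here.

        import random

        def bdm_sell(stated_wta, price_low=0.0, price_high=10.0):
            # The subject owns the good and states a minimum selling price.
            # A buyout price is then drawn at random.
            drawn = random.uniform(price_low, price_high)
            # If the drawn price meets the stated price, the sale happens at
            # the DRAWN price; otherwise the subject keeps the good. Because
            # the stated value never sets the price, only whether a trade
            # occurs, truthful reporting is the optimal strategy.
            if drawn >= stated_wta:
                return {"sold": True, "price": drawn}
            return {"sold": False, "price": None}

        print(bdm_sell(stated_wta=4.50))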

    Exchange Asymmetries Incorrectly Interpreted as Evidence of Endowment Effect Theory and Prospect Theory?

    Systematic asymmetries in exchange behavior have been widely interpreted as support for "endowment effect theory," an application of prospect theory positing that loss aversion and utility function kinks set by entitlements explain observed asymmetries. We experimentally test an alternative explanation, namely, that asymmetries are explained by classical preference theories finding influence through the experimental procedures typically used. Contrary to the predictions of endowment effect theory, we observe no asymmetries when we modify procedures to remove the influence of classical preference theories. When we return to traditional-type procedures, however, the asymmetries reappear. The results support explanations based in classical preference theories and reject endowment effect theory.

    PRNU-based image classification of origin social network with CNN

    A huge number of images is shared on social networks (SNs) every day and, in most cases, it is very difficult to reliably establish the SN of provenance of an image recovered from a hard disk, an SD card or a smartphone memory. During an investigation, it can be crucial to distinguish images coming directly from a photo camera from those downloaded from a social network and, in the latter case, to determine which SN among a defined group. It is well known that each SN leaves peculiar traces on each content during the upload-download process; such traces can be exploited for image classification. In this work, the idea is to use the PRNU, embedded in every acquired image, as the “carrier” of the particular SN traces, which modulate the PRNU in distinctive ways. We demonstrate that this SN-modulated noise residual can be adopted as a feature to detect the social network of origin by means of a trained convolutional neural network (CNN).
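
    A minimal sketch of the kind of noise-residual feature the paper builds on follows. The Gaussian denoiser is a simplification we assume for brevity (practical PRNU extraction uses stronger, typically wavelet-based, denoisers and averages many images per camera); the residual, not the image, is what would be fed to the CNN.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(image, sigma=1.0):
            # Subtract a denoised copy so that mostly sensor noise (and any
            # traces a social network's processing has modulated onto it)
            # remains. Real pipelines use stronger denoisers than this.
            image = image.astype(np.float64)
            return image - gaussian_filter(image, sigma=sigma)

        # Stand-in image; in practice this would be a photo downloaded
        # from a candidate social network.
        img = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)
        residual = noise_residual(img)
        print(residual.shape, residual.std())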