131,769 research outputs found

    Is Good Enough Good Enough For Swarthmore?

    Get PDF

    How Good is Good Enough?: Expert Evidence Under Daubert and Kumho

    Get PDF
    This essay is a response to Professor Edward Imwinkelried's article, Should the Courts Incorporate a Best Evidence Rule into the Standard Determining the Admissibility of Scientific Testimony?: Enough is Enough When it is not the Best. The authors have two basic points. First, they wish to make it clear that they never proposed the best evidence rule that he so vigorously attacks, and they think his suggestion that they did so is strained. Second, they wish to reiterate that courts sometimes should do more than they have done to ensure that expert testimony is reasonably sound. The important debate underway in the courts and the law reviews concerns the contours of the better evidence principle that the Supreme Court has placed between experts and the witness stand. The question that needs to be answered is this: how much better is good enough?

    Sometimes Close is Good Enough: The Value of Nearby Environmental Amenities

    Get PDF
    An extensive empirical literature exists showing that variations in region-specific amenities can account for persistent differences in real wages across regions. However, this literature has considered only amenities in the same location as the household. This paper argues that environmental amenities at some distance from but accessible to urban areas may lead to negative compensating wage differentials. We use a general equilibrium framework and data from the 1995 Current Population Survey to calculate implicit amenity prices based on measures of distance to environmental amenities. Our results suggest that amenities outside the metropolitan area do generate compensating wage differentials, as workers are willing to accept lower wages to live in accessible proximity to “nice” places. This implies that these places provide a positive externality to those communities that find them accessible. The estimated effects are quantitatively important, suggesting that these externalities should be taken into account in policy making.
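    A hedonic wage equation of the general form below illustrates how such a compensating differential could be estimated; the notation and functional form are illustrative assumptions rather than the authors' exact general-equilibrium specification:

        \ln w_{ij} = X_{ij}\beta + \gamma\, d_j + \varepsilon_{ij}

    where w_{ij} is the wage of worker i in metropolitan area j, X_{ij} collects worker and job characteristics, and d_j measures distance from area j to the environmental amenity. A positive estimated \gamma implies that wages rise with distance from the amenity, i.e. workers accept lower wages to live within accessible proximity to it.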

    Measuring acceptable input: What is "good enough"?

    Get PDF
    Many new assistive input systems developed to meet the needs of users with functional impairments fail to make it out of the research laboratory and into regular use by the intended end users. This paper examines some of the reasons for this failure and focuses particularly on whether the developers of such systems are using the correct metrics and approaches for evaluating the functional and social attributes of the input systems they are designing. It further focuses on the importance of benchmarking new assistive input systems against baseline measures of useful interaction rates that make allowance for factors such as input success/recognition rate, error rate, correction effort and input time. By addressing each of these measures, a more complete understanding of whether an input system is practically and functionally acceptable can be obtained, and design guidance for developers is provided.
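    As a rough illustration of how such baseline measures might be combined, the sketch below folds recognition rate, error rate, correction effort and raw input time into a single corrected-throughput figure; the formula and the example numbers are illustrative assumptions, not the paper's benchmarking protocol.

        # Illustrative sketch only: combines the factors named in the abstract
        # (recognition rate, errors, correction effort, input time) into one
        # throughput-style figure. The weighting is an assumption.
        def effective_input_rate(recognised: int,
                                 errors: int,
                                 correction_seconds: float,
                                 input_seconds: float) -> float:
            """Correct inputs per minute, after discounting errors and the
            extra time spent correcting them."""
            correct = recognised - errors
            total_time_min = (input_seconds + correction_seconds) / 60.0
            return correct / total_time_min if total_time_min > 0 else 0.0

        # Hypothetical session: 46 inputs recognised, 4 of them wrong,
        # 30 s of corrections on top of 120 s of raw input time.
        rate = effective_input_rate(46, 4, 30.0, 120.0)
        print(f"{rate:.1f} correct inputs per minute")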

    MapReduce is Good Enough? If All You Have is a Hammer, Throw Away Everything That's Not a Nail!

    Full text link
    Hadoop is currently the large-scale data analysis "hammer" of choice, but there exist classes of algorithms that aren't "nails", in the sense that they are not particularly amenable to the MapReduce programming model. To address this, researchers have proposed MapReduce extensions or alternative programming models in which these algorithms can be elegantly expressed. This essay espouses a very different position: that MapReduce is "good enough", and that instead of trying to invent screwdrivers, we should simply get rid of everything that's not a nail. To be more specific, much discussion in the literature surrounds the fact that iterative algorithms are a poor fit for MapReduce: the simple solution is to find alternative non-iterative algorithms that solve the same problem. This essay captures my personal experiences as an academic researcher as well as a software engineer in a "real-world" production analytics environment. From this combined perspective I reflect on the current state and future of "big data" research.
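    For readers unfamiliar with the programming model under discussion, the following toy sketch shows the canonical map/reduce pair for word counting in plain Python; it illustrates the model's single-pass, non-iterative shape and is not Hadoop code.

        # Toy illustration of the MapReduce model: a mapper emits (key, value)
        # pairs, the framework groups them by key, and a reducer aggregates
        # each group. One pass over the data, no iteration.
        from collections import defaultdict

        def mapper(document):
            for word in document.split():
                yield word.lower(), 1

        def reducer(word, counts):
            return word, sum(counts)

        def run(documents):
            groups = defaultdict(list)          # stands in for the shuffle phase
            for doc in documents:
                for key, value in mapper(doc):
                    groups[key].append(value)
            return dict(reducer(k, v) for k, v in groups.items())

        print(run(["a nail is a nail", "throw away the screwdriver"]))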

    Is GPT-4 good enough to evaluate jokes?

    Get PDF
    In this paper, we investigate the ability of large language models (LLMs), specifically GPT-4, to assess the funniness of jokes in comparison to human ratings. We use a dataset of jokes annotated with human ratings and explore different system descriptions in GPT-4 to imitate human judges with various types of humour. We propose a novel method to create a system description using many-shot prompting, providing numerous examples of jokes and their evaluation scores. Additionally, we examine the performance of different system descriptions when given varying amounts of instructions and examples on how to evaluate jokes. Our main contributions include a new method for creating a system description in LLMs to evaluate jokes and a comprehensive methodology to assess LLMs' ability to evaluate jokes using rankings rather than individual scores.
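    The sketch below shows one way a many-shot system description of the kind described here could be assembled and sent to the OpenAI chat API; the example jokes, scores and prompt wording are placeholders, not the authors' actual prompt or dataset.

        # Sketch of many-shot prompting for joke evaluation. The jokes, scores
        # and instructions are placeholders; only the overall shape (a system
        # message packed with rated examples) follows the idea in the abstract.
        from openai import OpenAI

        rated_examples = [
            ("Why did the scarecrow win an award? He was outstanding in his field.", 4),
            ("I told my computer a joke. It didn't get it.", 2),
            # ...in a many-shot setting, hundreds of rated jokes would go here
        ]

        system_description = (
            "You are a human judge rating how funny jokes are on a 1-5 scale.\n"
            "Here are examples of jokes and the scores they received:\n"
            + "\n".join(f"Joke: {j}\nScore: {s}" for j, s in rated_examples)
        )

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_description},
                {"role": "user", "content": "Rate this joke from 1 to 5: "
                                            "Why do programmers prefer dark mode? "
                                            "Because light attracts bugs."},
            ],
        )
        print(response.choices[0].message.content)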

    Explanatory Consolidation: From ‘Best’ to ‘Good Enough’

    Get PDF
    In science and everyday life, we often infer that something is true because it would explain some set of facts better than any other hypothesis we can think of. But what if we have reason to believe that there is a better way to explain these facts that we just haven't thought of? Wouldn't that undermine our warrant for believing the best available explanation? Many philosophers have assumed that we can solve such underconsideration problems by stipulating that a hypothesis should not only be 'the best' explanation available; rather, it should also be 'good enough'. Unfortunately, however, the only current suggestion for what it might mean to say that an explanation is 'good enough' is, well, not good enough. This paper aims to provide a better account of what is required for an explanatory hypothesis to be considered 'good enough'. In brief, the account holds that a 'good enough' hypothesis is one that has gone through a process that I call explanatory consolidation, in which accumulating evidence and failed attempts to formulate better alternatives gradually make it more plausible that the explanation we currently have is better than any other that could be formulated.

    A Service Conundrum: Can Outstanding Service Be Too Good?

    Get PDF
    Many service operations espouse the need to provide exceptional service. But sometimes good enough is good enough, especially if the alternative is uneven service.