559 research outputs found

    Learning to Resolve Natural Language Ambiguities: A Unified Approach

    We analyze several of the commonly used statistics-based and machine-learning algorithms for natural language disambiguation tasks and observe that they can be re-cast as learning linear separators in the feature space. Each method makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, each searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data-driven approach that merely searches for a good linear separator in the feature space, without further assumptions about the domain or a specific problem. We present such an approach - a sparse network of linear separators utilizing the Winnow learning algorithm - and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains with a very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well-studied lexical disambiguation tasks such as context-sensitive spelling correction, prepositional phrase attachment, and part-of-speech tagging. In all cases we show that our approach either outperforms the other methods tried for these tasks or performs comparably to the best.
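The attribute-efficient learner the abstract describes is built on the Winnow multiplicative-update rule. The sketch below is a minimal, illustrative version assuming binary attributes and a promotion/demotion factor of 2; the function name and interface are assumptions, not the paper's code.

```python
def winnow(examples, n, threshold=None, alpha=2.0):
    """Train a Winnow linear separator over n binary attributes.

    examples: iterable of (x, y) pairs, where x is a list of n 0/1
    attribute values and y is the label (1 or 0).
    """
    if threshold is None:
        threshold = n / 2
    w = [1.0] * n  # multiplicative weights all start at 1
    for x, y in examples:
        score = sum(wi * xi for wi, xi in zip(w, x))
        pred = 1 if score >= threshold else 0
        if pred == y:
            continue
        if y == 1:  # promotion: multiply weights of active attributes
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
        else:       # demotion: divide weights of active attributes
            w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w
```

Because each mistake only multiplies or divides the weights of the attributes active in that example, the number of mistakes scales with the number of relevant attributes rather than with n, which is what makes the approach suitable for very high-dimensional feature spaces.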

    Dissortative From the Outside, Assortative From the Inside: Social Structure and Behavior in the Industrial Trade Network

    It is generally accepted that neighboring nodes in financial networks are negatively assorted with respect to the correlation between their degrees. This feature is thought to play an important 'damping' role in the market during downturns (periods of distress), since this connectivity pattern between firms lowers the chances of distress propagating and amplifying itself. In this paper we explore a trade network of industrial firms where the nodes are suppliers or buyers, and the links are the invoices that the suppliers send to their buyers and then present to their bank for discounting. The network was collected by a large Italian bank in 2007, from its intermediation of the sales on credit made by its clients. The network also shows dissortative behavior, as seen in other studies on financial networks. However, when looking at the credit rating of the firms, an important attribute internal to each node, we find that firms that trade with one another share overwhelming similarity. We know that much data is missing from our data set. However, we can quantify the amount of missing data using information exposure, a variable that connects social structure and behavior. This variable is the ratio of the sales invoices that a supplier presents to its bank over its total sales. Results reveal a non-trivial and robust relationship between the information exposure and the credit rating of a firm, indicating the influence of the neighbors on a firm's rating. This methodology provides a new insight into how to reconstruct a network suffering from incomplete information.
    Comment: 10 pages, 10 figures, to appear in the IEEE conference proceedings of HICSS-4

    Incrementally Learning Objects by Touch: Online Discriminative and Generative Models for Tactile-Based Recognition


    Inferentialism and knowledge: Brandom's arguments against reliabilism

    I take issue with Robert Brandom’s claim that on an analysis of knowledge based on objective probabilities it is not possible to provide a stable answer to the question whether a belief has the status of knowledge. I argue that the version of the problem of generality developed by Brandom doesn’t undermine a truth-tracking account of noninferential knowledge that construes truth-tracking in terms of conditional probabilities. I then consider Sherrilyn Roush’s claim that an account of knowledge based on probabilistic tracking faces a version of the problem of generality. I argue that the problems she raises are specific to her account and do not affect the version of the view that I have advanced. I then consider Brandom’s argument that the cases that motivate reliabilist epistemologies are in principle exceptional. I argue that he has failed to make a cogent case for this claim. I close with the suggestion that the representationalist approach to knowledge that I endorse and Brandom rejects is in principle compatible with the kind of pragmatist approach to belief and truth that both Brandom and I endorse.

    Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review

    E-discovery processes that use automated tools to prioritize and select documents for review are typically regarded as potential cost-savers – but inferior alternatives – to exhaustive manual review, in which a cadre of reviewers assesses every document for responsiveness to a production request, and for privilege. This Article offers evidence that such technology-assisted processes, while indeed more efficient, can also yield results superior to those of exhaustive manual review, as measured by recall and precision, as well as F1, a summary measure combining both recall and precision. The evidence derives from an analysis of data collected from the TREC 2009 Legal Track Interactive Task, and shows that, at TREC 2009, technology-assisted review processes enabled two participating teams to achieve results superior to those that could have been achieved through a manual review of the entire document collection by the official TREC assessors.
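The three evaluation measures named above follow directly from the retrieved and relevant document sets. A minimal, set-based sketch (the function and variable names are illustrative, not part of the TREC tooling):

```python
def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall, and F1 for a document review.

    retrieved: set of documents the review process produced
    relevant:  set of truly responsive documents
    """
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Because F1 is a harmonic mean, it rewards processes that balance the two measures: a review that retrieves everything (perfect recall, poor precision) or almost nothing (the reverse) scores low.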

    The Role of Jury in Modern Malpractice Law

    This article explores the policy issues raised by the choice between a custom-based standard of care and a jury-determined reasonability standard. The author examines not only traditional legal arguments but also the recent findings of cognitive psychology, jury performance studies, and health industry research. Not surprisingly, this analysis reveals that both options are imperfect. However, the author cautiously recommends the reasonable physician standard. The revolutionary transformation of the health care industry in the last quarter of a century has transferred considerable power from physicians to the health insurance industry, an industry that has not yet earned the privilege of self-regulation. Unlike the custom-based standard, the reasonable care standard assigns the task of standard-setting to representatives of the community and not to the regulated industry. And because the reasonable physician standard precludes unilateral establishment of the standard of care by the health care industry, it is also more likely to force the health care industry to engage the community in a conversation about health care cost and quality. For these reasons, it is worth taking the risk that juries will be more resistant to cost control measures than health policy analysts would recommend.

    Integrated decision-support framework for sustainable fleet implementation

    Issues regarding fossil fuel depletion, climate change, and air pollution associated with motorised urban transportation have motivated intensive research to find cleaner, greener, and energy-efficient alternative fuels. Alternative fuel vehicles have a pivotal role in moving towards a sustainable future, with many already deployed as public transport fleets. Unlike private vehicles, the process of evaluating and selecting the appropriate fuel technology for a taxi fleet, for instance, can be demanding due to the involvement of stakeholders with different, often conflicting objectives. While many life cycle models have been developed as decision-support tools for evaluating vehicle technologies and fuel pathways based on multiple criteria, the different perspectives of fleet operators, policymakers, and vehicle manufacturers may create a barrier towards the adoption of eco-friendly, low-carbon fleets. At present, one optimal solution that performs the best in all aspects is difficult to achieve in practice. Therefore, there is a need for an integrated tool that can align the different priorities of the economic, environmental, and social perspectives of decision makers. This research aims to develop a computer-based framework that can be used as a shared justification tool to support multi-stakeholder decision making. The main contribution is the implementation and applicability testing of the framework via a probabilistic life cycle analysis with a satisficing model. The model was initially tested and evaluated by representative third-party users from the transport industry. When demonstrated in an illustrative taxi case study, results from the life cycle analysis show constant compensation and trade-offs between the criteria. Subsequently, this thesis provides an example of how the satisficing choice model seeks a satisfactory solution that adequately meets the multiple objectives of decision makers.
    Also, the research provides insights for other research and industry efforts in developing tools to support decision making towards sustainable development practices.
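The core of a satisficing choice model can be sketched as a filter that keeps any alternative meeting every stakeholder's aspiration level, rather than seeking a single optimum. The fleet names, criteria, scores, and thresholds below are invented for illustration and are not from the thesis.

```python
def satisfice(alternatives, thresholds):
    """Return every alternative whose criteria all meet their
    aspiration levels (higher scores are better).

    alternatives: {name: {criterion: score in [0, 1]}}
    thresholds:   {criterion: minimum acceptable score}
    """
    return [name for name, scores in alternatives.items()
            if all(scores[c] >= t for c, t in thresholds.items())]


# Hypothetical normalized scores for three taxi-fleet options
fleets = {
    "diesel":   {"cost": 0.9, "emissions": 0.2, "social": 0.5},
    "electric": {"cost": 0.6, "emissions": 0.9, "social": 0.7},
    "hybrid":   {"cost": 0.7, "emissions": 0.6, "social": 0.6},
}
# Aspiration levels the stakeholders have agreed on
levels = {"cost": 0.5, "emissions": 0.5, "social": 0.5}
acceptable = satisfice(fleets, levels)
```

Here diesel is rejected on emissions while both electric and hybrid survive, illustrating how the model narrows the field to options acceptable to all stakeholders instead of imposing one "best" fleet.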

    Evidence-Based Sentencing and the Scientific Rationalization of Discrimination

    This Article critiques, on legal and empirical grounds, the growing trend of basing criminal sentences on actuarial recidivism risk prediction instruments that include demographic and socioeconomic variables. I argue that this practice violates the Equal Protection Clause and is bad policy: an explicit embrace of otherwise-condemned discrimination, sanitized by scientific language. To demonstrate that this practice raises serious constitutional concerns, I comprehensively review the relevant case law, much of which has been ignored by existing literature. To demonstrate that the policy is not justified by countervailing state interests, I review the empirical evidence underlying the instruments. I show that they provide wildly imprecise individual risk predictions, that there is no compelling evidence that they outperform judges' informal predictions, that less discriminatory alternatives would likely perform as well, and that the instruments do not even address the right question: the effect of a given sentencing decision on recidivism risk. Finally, I also present new empirical evidence, based on a randomized experiment using fictional cases, suggesting that these instruments should not be expected merely to substitute actuarial predictions for less scientific risk assessments but instead to increase the weight given to recidivism risk versus other sentencing considerations.